| query_id (string, 32 chars) | query (string, 5 – 5.38k chars) | positive_passages (list, 1 – 23 items) | negative_passages (list, 4 – 100 items) | subset (string, 7 classes) |
---|---|---|---|---|
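Each row below follows this schema: one query paired with lists of positive (relevant) and negative (non-relevant) passages, where every passage is an object with "docid", "text" and "title" fields, plus a subset label (e.g. "scidocsrr"). As a minimal sketch of reading such rows, assuming the table is published as a Hugging Face dataset (the dataset identifier and split below are placeholders, not taken from this preview):

```python
from datasets import load_dataset

# Placeholder dataset id/split -- substitute the actual path of the dataset shown here.
ds = load_dataset("org/reranking-dataset", split="train")

row = ds[0]
print(row["query_id"], row["subset"])        # 32-char hex id and subset name, e.g. "scidocsrr"
print(row["query"])                          # free-text query (5 chars up to ~5.38k chars)
print(len(row["positive_passages"]),         # 1-23 relevant passages per row
      len(row["negative_passages"]))         # 4-100 non-relevant passages per row

passage = row["negative_passages"][0]        # each passage is {"docid", "text", "title"}
print(passage["docid"], passage["text"][:80])
```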
0f3b88a34e7a921f1d5f261111105a97
|
From micro to macro: data driven phenotyping by densification of longitudinal electronic medical records
|
[
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
}
] |
[
{
"docid": "a3cf141ce82d39f8368e4465fc01c0c5",
"text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. However, one necessary ingredient for natural interaction is still missing - emotions. This paper describes the problem of bimodal emotion recognition and advocates the use of probabilistic graphical models when fusing the different modalities. We test our audio-visual emotion recognition approach on 38 subjects with 11 HCI-related affect states. The experimental results show that the average person-dependent emotion recognition accuracy is greatly improved when both visual and audio information are used in classification",
"title": ""
},
{
"docid": "77b1507ce0e732b3ac93d83f1a5971b3",
"text": "Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technology for high data rate communication system. The basic principle of OFDM i s to divide the available spectrum into parallel channel s in order to transmit data on these channels at a low rate. The O FDM concept is based on the fact that the channels refe rr d to as carriers are orthogonal to each other. Also, the fr equency responses of the parallel channels are overlapping. The aim of this paper is to simulate, using GNU Octave, an OFD M transmission under Additive White Gaussian Noise (AWGN) and/or Rayleigh fading and to analyze the effects o f these phenomena.",
"title": ""
},
{
"docid": "1ef1e20f24fa75b40bcc88a40a544c5b",
"text": "Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. Monitoring grid resources is a lively research area given the challenges and manifold applications. The aim of this paper is to advance the understanding of grid monitoring by introducing the involved concepts, requirements, phases, and related standardisation activities, including Global Grid Forum’s Grid Monitoring Architecture. Based on a refinement of the latter, the paper proposes a taxonomy of grid monitoring systems, which is employed to classify a wide range of projects and frameworks. The value of the offered taxonomy lies in that it captures a given system’s scope, scalability, generality and flexibility. The paper concludes with, among others, a discussion of the considered systems, as well as directions for future research. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "24e380a79c5520a4f656ff2177d43dd7",
"text": "a r t i c l e i n f o Social media have increasingly become popular platforms for information dissemination. Recently, companies have attempted to take advantage of social advertising to deliver their advertisements to appropriate customers. The success of message propagation in social media depends greatly on the content relevance and the closeness of social relationships. In this paper, considering the factors of user preference, network influence , and propagation capability, we propose a diffusion mechanism to deliver advertising information over microblogging media. Our experimental results show that the proposed model could provide advertisers with suitable targets for diffusing advertisements continuously and thus efficiently enhance advertising effectiveness. In recent years, social media, such as Facebook, Twitter and Plurk, have flourished and raised much attention. Social media provide users with an excellent platform to share and receive information and give marketers a great opportunity to diffuse information through numerous populations. An overwhelming majority of mar-keters are using social media to market their businesses, and a significant 81% of these marketers indicate that their efforts in social media have generated effective exposure for their businesses [59]. With effective vehicles for understanding customer behavior and new hybrid elements of the promotion mix, social media allow enterprises to make timely contact with the end-consumer at relatively low cost and higher levels of efficiency [52]. Since the World Wide Web (Web) is now the primary message delivering medium between advertisers and consumers, it is a critical issue to find the best way to utilize on-line media for advertising purposes [18,29]. The effectiveness of advertisement distribution highly relies on well understanding the preference information of the targeted users. However, some implicit personal information of users, particularly the new users, may not be always obtainable to the marketers [23]. As users know more about their friends than marketers, the relations between the users become a natural medium and filter for message diffusion. Moreover, most people are willing to share their information with friends and are likely to be affected by the opinions of their friends [35,45]. Social advertising is a kind of recommendation system, of sharing information between friends. It takes advantage of the relation of users to conduct an advertising campaign. In 2010, eMarketer reported that 90% of consumers rely on recommendations from people they trust. In the same time, IDG Amplify indicated that the efficiency of social advertising is greater than the traditional …",
"title": ""
},
{
"docid": "8fabb9fe465fe70753fe4f035e4513f1",
"text": "Gait energy images (GEIs) and its variants form the basis of many recent appearance-based gait recognition systems. The GEI combines good recognition performance with a simple implementation, though it suffers problems inherent to appearance-based approaches, such as being highly view dependent. In this paper, we extend the concept of the GEI to 3D, to create what we call the gait energy volume, or GEV. A basic GEV implementation is tested on the CMU MoBo database, showing improvements over both the GEI baseline and a fused multi-view GEI approach. We also demonstrate the efficacy of this approach on partial volume reconstructions created from frontal depth images, which can be more practically acquired, for example, in biometric portals implemented with stereo cameras, or other depth acquisition systems. Experiments on frontal depth images are evaluated on an in-house developed database captured using the Microsoft Kinect, and demonstrate the validity of the proposed approach.",
"title": ""
},
{
"docid": "ec189ac55b64402d843721de4fc1f15c",
"text": "DroidMiner is a new malicious Android app detection system that uses static analysis to automatically mine malicious program logic from known Android malware. DroidMiner uses a behavioral graph to abstract malware program logic into a sequence of threat modalities, and then applies machine-learning techniques to identify and label elements of the graph that match harvested threat modalities. Once trained on a mobile malware corpus, DroidMiner can automatically scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) precisely characterize behaviors found within the analyzed app. While DroidMiner is not the first to attempt automated classification of Android applications based on Framework API calls, it is distinguished by its development of modalities that are resistant to noise insertions and its use of associative rule mining that enables automated association of malicious behaviors with modalities. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, DroidMiner achieves a 95.3% detection rate, with a 0.4% false positive rate. We further evaluate DroidMiner’s ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.",
"title": ""
},
{
"docid": "f29b8c75a784a71dfaac5716017ff4f3",
"text": "The objective of this paper is to design a multi-agent system architecture for the Scrum methodology. Scrum is an iterative, incremental framework for software development which is flexible, adaptable and highly productive. An agent is a system situated within and a part of an environment that senses the environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future (Franklin and Graesser, 1996). To our knowledge, this is first attempt to include software agents in the Scrum framework. Furthermore, our design covers all the stages of software development. Alternative approaches were only restricted to the analysis and design phases. This Multi-Agent System (MAS) Architecture for Scrum acts as a design blueprint and a baseline architecture that can be realised into a physical implementation by using an appropriate agent development framework. The development of an experimental prototype for the proposed MAS Architecture is in progress. It is expected that this tool will provide support to the development team who will no longer be expected to report, update and manage non-core activities daily.",
"title": ""
},
{
"docid": "84dd3682a7cd1ea88b6d6588e46078ad",
"text": "OBJECTIVES\nThe purpose of this exploratory study was to see if meaning in life is associated with mortality in old age.\n\n\nMETHODS\nInterviews were conducted with a nationwide sample of older adults (N = 1,361). Data were collected on meaning in life, mortality, and select control measures.\n\n\nRESULTS\nThree main findings emerged from this study. First, the data suggest that older people with a strong sense of meaning in life are less likely to die over the study follow-up period than those who do not have a strong sense of meaning. Second, the findings indicate that the effect of meaning on mortality can be attributed to the potentially important indirect effect that operates through health. Third, further analysis revealed that one dimension of meaning-having a strong sense of purpose in life--has a stronger relationship with mortality than other facets of meaning. The main study findings were observed after the effects of attendance at religious services and emotional support were controlled statistically.\n\n\nDISCUSSION\nIf the results from this study can be replicated, then interventions should be designed to help older people find a greater sense of purpose in life.",
"title": ""
},
{
"docid": "c7d3381b32e6a6bbe3ea9d9b870ce1d2",
"text": "Software defect prediction plays an important role in improving software quality and it help to reducing time and cost for software testing. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The ability of a machine to improve its performance based on previous results. Machine learning improves efficiency of human learning, discover new things or structure that is unknown to humans and find important information in a document. For that purpose, different machine learning techniques are used to remove the unnecessary, erroneous data from the dataset. Software defect prediction is seen as a highly important ability when planning a software project and much greater effort is needed to solve this complex problem using a software metrics and defect dataset. Metrics are the relationship between the numerical value and it applied on the software therefore it is used for predicting defect. The primary goal of this survey paper is to understand the existing techniques for predicting software defect.",
"title": ""
},
{
"docid": "38fccd4fd4a18c4c4bc9575092a24a3e",
"text": "We investigate the problem of human identity and gender recognition from gait sequences with arbitrary walking directions. Most current approaches make the unrealistic assumption that persons walk along a fixed direction or a pre-defined path. Given a gait sequence collected from arbitrary walking directions, we first obtain human silhouettes by background subtraction and cluster them into several clusters. For each cluster, we compute the cluster-based averaged gait image as features. Then, we propose a sparse reconstruction based metric learning method to learn a distance metric to minimize the intra-class sparse reconstruction errors and maximize the inter-class sparse reconstruction errors simultaneously, so that discriminative information can be exploited for recognition. The experimental results show the efficacy of our approach.",
"title": ""
},
{
"docid": "3f5097b33aab695678caca712b649a8f",
"text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "50f09f5b2e579e878f041f136bafe07e",
"text": "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.",
"title": ""
},
{
"docid": "d6aa7df08694089a6e0e8030be374c20",
"text": "Human pluripotent stem cells (hPSCs) offer a unique platform for elucidating the genes and molecular pathways that underlie complex traits and diseases. To realize this promise, methods for rapid and controllable genetic manipulations are urgently needed. By combining two newly developed gene-editing tools, the TALEN and CRISPR/Cas systems, we have developed a genome-engineering platform in hPSCs, which we named iCRISPR. iCRISPR enabled rapid and highly efficient generation of biallelic knockout hPSCs for loss-of-function studies, as well as homozygous knockin hPSCs with specific nucleotide alterations for precise modeling of disease conditions. We further demonstrate efficient one-step generation of double- and triple-gene knockout hPSC lines, as well as stage-specific inducible gene knockout during hPSC differentiation. Thus the iCRISPR platform is uniquely suited for dissection of complex genetic interactions and pleiotropic gene functions in human disease studies and has the potential to support high-throughput genetic analysis in hPSCs.",
"title": ""
},
{
"docid": "8372f42c70b3790757f4f1d5535cebc1",
"text": "WiFi positioning system has been studying in many fields since the past. Recently, a lot of mobile companies are competing for smartphones. Accordingly, this paper proposes an indoor WiFi positioning system using Android-based smartphones.",
"title": ""
},
{
"docid": "b6b5afb72393e89c211bac283e39d8a3",
"text": "In order to promote the use of mushrooms as source of nutrients and nutraceuticals, several experiments were performed in wild and commercial species. The analysis of nutrients included determination of proteins, fats, ash, and carbohydrates, particularly sugars by HPLC-RI. The analysis of nutraceuticals included determination of fatty acids by GC-FID, and other phytochemicals such as tocopherols, by HPLC-fluorescence, and phenolics, flavonoids, carotenoids and ascorbic acid, by spectrophotometer techniques. The antimicrobial properties of the mushrooms were also screened against fungi, Gram positive and Gram negative bacteria. The wild mushroom species proved to be less energetic than the commercial sp., containing higher contents of protein and lower fat concentrations. In general, commercial species seem to have higher concentrations of sugars, while wild sp. contained lower values of MUFA but also higher contents of PUFA. alpha-Tocopherol was detected in higher amounts in the wild species, while gamma-tocopherol was not found in these species. Wild mushrooms revealed a higher content of phenols but a lower content of ascorbic acid, than commercial mushrooms. There were no differences between the antimicrobial properties of wild and commercial species. The ongoing research will lead to a new generation of foods, and will certainly promote their nutritional and medicinal use.",
"title": ""
},
{
"docid": "1b556f4e0c69c81780973a7da8ba2f8e",
"text": "We explore ways of allowing for the offloading of computationally rigorous tasks from devices with slow logical processors onto a network of anonymous peer-processors. Recent advances in secret sharing schemes, decentralized consensus mechanisms, and multiparty computation (MPC) protocols are combined to create a P2P MPC market. Unlike other computational ”clouds”, ours is able to generically compute any arithmetic circuit, providing a viable platform for processing on the semantic web. Finally, we show that such a system works in a hostile environment, that it scales well, and that it adapts very easily to any future advances in the complexity theoretic cryptography used. Specifically, we show that the feasibility of our system can only improve, and is historically guaranteed to do so.",
"title": ""
},
{
"docid": "5bde20f5c0cad9bf14bec276b59c9054",
"text": "Energy conversion of sunlight by photosynthetic organisms has changed Earth and life on it. Photosynthesis arose early in Earth's history, and the earliest forms of photosynthetic life were almost certainly anoxygenic (non-oxygen evolving). The invention of oxygenic photosynthesis and the subsequent rise of atmospheric oxygen approximately 2.4 billion years ago revolutionized the energetic and enzymatic fundamentals of life. The repercussions of this revolution are manifested in novel biosynthetic pathways of photosynthetic cofactors and the modification of electron carriers, pigments, and existing and alternative modes of photosynthetic carbon fixation. The evolutionary history of photosynthetic organisms is further complicated by lateral gene transfer that involved photosynthetic components as well as by endosymbiotic events. An expanding wealth of genetic information, together with biochemical, biophysical, and physiological data, reveals a mosaic of photosynthetic features. In combination, these data provide an increasingly robust framework to formulate and evaluate hypotheses concerning the origin and evolution of photosynthesis.",
"title": ""
},
{
"docid": "05fa9ab12a14f5624ab532c9c034bbb8",
"text": "This paper presents the design, implementation, characterization and recording results of a wireless, batteryless microsystem for neural recording on rat, with implantable grid electrode and 3-dimensional probe array. The former provides brain surface ECoG acquisition, while the latter achieves 3D extracellular recording in the 3D target volume of tissue. The microsystem addressed the aforementioned properties by combining MEMS neural sensors, low-power circuit designs and commercial chips into system-level integration.",
"title": ""
},
{
"docid": "e1bb6bcd75b14e970c461ef0b55dc9fe",
"text": "The aim of this study was to assess and compare the body image of breast cancer patients (n = 70) whom underwent breast conserving surgery or mastectomy, as well as to compare patients’ scores with that of a sample of healthy control women (n = 70). A secondary objective of this study was to examine the reliability and validity of the 10-item Greek version of the Body Image Scale, a multidimensional measure of body image changes and concerns. Exploratory and confirmatory factor analyses on the items of this scale resulted in a two factor solution, indicating perceived attractiveness, and body and appearance satisfaction. Comparison of the two surgical groups revealed that women treated with mastectomy felt less attractive and more self-conscious, did not like their overall appearance, were dissatisfied with their scar, and avoided contact with people. Hierarchical regression analysis showed that more general body image concerns were associated with belonging to the mastectomy group, compared to the cancer-free group of women. Implications for clinical practice and recommendations for future investigations are discussed.",
"title": ""
}
] |
scidocsrr
|
ee7d829c3f6394a656eb5e6d67739637
|
Efficient resampling methods for training support vector machines with imbalanced datasets
|
[
{
"docid": "df75c48628144cdbcf974502ea24aa24",
"text": "Standard SVM training has O(m3) time andO(m2) space complexities, where m is the training set size. It is thus computationally infeasible on very larg e data sets. By observing that practical SVM implementations onlyapproximatethe optimal solution by an iterative strategy, we scale up kernel methods by exploiting such “approximateness” in t h s paper. We first show that many kernel methods can be equivalently formulated as minimum en closing ball (MEB) problems in computational geometry. Then, by adopting an efficient appr oximate MEB algorithm, we obtain provably approximately optimal solutions with the idea of c re sets. Our proposed Core Vector Machine (CVM) algorithm can be used with nonlinear kernels a nd has a time complexity that is linear in m and a space complexity that is independent of m. Experiments on large toy and realworld data sets demonstrate that the CVM is as accurate as exi sting SVM implementations, but is much faster and can handle much larger data sets than existin g scale-up methods. For example, CVM with the Gaussian kernel produces superior results on th e KDDCUP-99 intrusion detection data, which has about five million training patterns, in only 1.4 seconds on a 3.2GHz Pentium–4 PC.",
"title": ""
}
] |
[
{
"docid": "b2418dc7ae9659d643a74ba5c0be2853",
"text": "MITJA D. BACK*, LARS PENKE, STEFAN C. SCHMUKLE, KAROLINE SACHSE, PETER BORKENAU and JENS B. ASENDORPF Department of Psychology, Johannes Gutenberg-University Mainz, Germany Department of Psychology and Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, UK Department of Psychology, Westfälische Wilhelms-University Münster, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Humboldt University Berlin, Germany",
"title": ""
},
{
"docid": "610629d3891c10442fe5065e07d33736",
"text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.",
"title": ""
},
{
"docid": "d9c9b9bdfa8333320097b5a4f97c8663",
"text": "This article describes the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (Anderson et al., 2004; Anderson & Lebiere, 1998) and its detailed application to the learning of algebraic symbol manipulation. The theory is applied to modeling the data from a study by Qin, Anderson, Silk, Stenger, & Carter (2004) in which children learn to solve linear equations and perfect their skills over a 6-day period. Functional MRI data show that: (a) a motor region tracks the output of equation solutions, (b) a prefrontal region tracks the retrieval of declarative information, (c) a parietal region tracks the transformation of mental representations of the equation, (d) an anterior cingulate region tracks the setting of goal information to control the information flow, and (e) a caudate region tracks the firing of productions in the ACT-R model. The article concludes with an architectural comparison of the competence children display in this task and the competence that monkeys have shown in tasks that require manipulations of sequences of elements.",
"title": ""
},
{
"docid": "1c6a589d2c74bd1feb3e98c21a1375a9",
"text": "UNLABELLED\nMinimally invasive approach for groin hernia treatment is still controversial, but in the last decade, it tends to become the standard procedure for one day surgery. We present herein the technique of laparoscopic Trans Abdominal Pre Peritoneal approach (TAPP). The surgical technique is presented step-by step;the different procedures key points (e.g. anatomic landmarks recognition, diagnosis of \"occult\" hernias, preperitoneal and hernia sac dissection, mesh placement and peritoneal closure) are described and discussed in detail, several tips and tricks being noted and highlighted.\n\n\nCONCLUSIONS\nTAPP is a feasible method for treating groin hernia associated with low rate of postoperative morbidity and recurrence. The anatomic landmarks are easily recognizable. The laparoscopic exploration allows for the treatment of incarcerated strangulated hernias and the intraoperative diagnosis of occult hernias.",
"title": ""
},
{
"docid": "40a9f094ae1c8da71c0e71aee5fd7fd8",
"text": "Distributed transactional storage is an important service in today's data centers. Achieving high performance without high complexity is often a challenge for these systems due to sophisticated consistency protocols and multiple layers of abstraction. In this paper we show how to combine two emerging technologies---Software-Defined Flash (SDF) and precise synchronized clocks---to improve performance and reduce complexity for transactional storage within the data center.\n We present a distributed transactional system (called MILANA) as a layer above a durable multi-version key-value store (called SEMEL) for read-heavy workloads within a data center. SEMEL exploits write behavior of SSDs to maintain a time-ordered sequence of versions for each key efficiently and durably. MILANA adds a variant of optimistic concurrency control above SEMEL's API to service read requests from a consistent snapshot and to enable clients to make fast local commit or abort decisions for read-only transactions.\n Experiments with the prototype reveal up to 43% lower transaction abort rates using IEEE Precision Time Protocol (PTP) vs. the standard Network Time Protocol (NTP). Under the Retwis benchmark, client-local validation of read-only transactions yields a 35% reduction in latency and 55% increase in transaction throughput.",
"title": ""
},
{
"docid": "fc7c7828428a4018a8aaddaff4eb5b3f",
"text": "Data mining is comprised of many data analysis techniques. Its basic objective is to discover the hidden and useful data pattern from very large set of data. Graph mining, which has gained much attention in the last few decades, is one of the novel approaches for mining the dataset represented by graph structure. Graph mining finds its applications in various problem domains, including: bioinformatics, chemical reactions, Program flow structures, computer networks, social networks etc. Different data mining approaches are used for mining the graph-based data and performing useful analysis on these mined data. In literature various graph mining approaches have been proposed. Each of these approaches is based on either classification; clustering or decision trees data mining techniques. In this study, we present a comprehensive review of various graph mining techniques. These different graph mining techniques have been critically evaluated in this study. This evaluation is based on different parameters. In our future work, we will provide our own classification based graph mining technique which will efficiently and accurately perform mining on the graph structured data.",
"title": ""
},
{
"docid": "15ce175cc7aa263ded19c0ef344d9a61",
"text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"title": ""
},
{
"docid": "979b0feaadefcf8494af4667cfe9a1ff",
"text": "We study fairness within the stochastic,multi-armed bandit (MAB) decision making framework. We adapt the fairness framework of “treating similar individuals similarly” [5] to this seing. Here, an ‘individual’ corresponds to an arm and two arms are ‘similar’ if they have a similar quality distribution. First, we adopt a smoothness constraint that if two arms have a similar quality distribution then the probability of selecting each arm should be similar. In addition, we dene the fairness regret, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm is equal to the probability with which the arm has the best quality realization. We show that a variation on ompson sampling satises smooth fairness for total variation distance, and give an Õ((kT )2/3) bound on fairness regret. is complements prior work [12], which protects an on-average beer arm from being less favored. We also explain how to extend our algorithm to the dueling bandit seing. ACM Reference format: Yang Liu, Goran Radanovic, Christos Dimitrakakis, DebmalyaMandal, andDavid C. Parkes. 2017. Calibrated Fairness in Bandits. In Proceedings of FAT-ML, Calibrated Fairness in Bandits, September 2017 (FAT-ML17), 7 pages. DOI: 10.1145/nnnnnnn.nnnnnnn",
"title": ""
},
{
"docid": "8640cd629e07f8fa6764c387d9fa7c29",
"text": "We describe an evaluation of spoken dialogue strategies designed using hierarchical reinforcement learning agents. The dialogue strategies were learnt in a simulated environment and tested in a laboratory setting with 32 users. These dialogues were used to evaluate three types of machine dialogue behaviour: hand-coded, fully-learnt and semi-learnt. These experiments also served to evaluate the realism of simulated dialogues using two proposed metrics contrasted with ‘PrecisionRecall’. The learnt dialogue behaviours used the Semi-Markov Decision Process (SMDP) model, and we report the first evaluation of this model in a realistic conversational environment. Experimental results in the travel planning domain provide evidence to support the following claims: (a) hierarchical semi-learnt dialogue agents are a better alternative (with higher overall performance) than deterministic or fully-learnt behaviour; (b) spoken dialogue strategies learnt with highly coherent user behaviour and conservative recognition error rates (keyword error rate of 20%) can outperform a reasonable hand-coded strategy; and (c) hierarchical reinforcement learning dialogue agents are feasible and promising for the (semi) automatic design of optimized dialogue behaviours in larger-scale systems. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "51793d81dd923b59f764dcb4c8a0343f",
"text": "Augmented Reality (AR) systems which use optical tracking with fiducial marker for registration have had an important role in popularizing this technology, since only a personal computer with a conventional webcam is required. However, in most these applications, the virtual elements are shown only in the foreground a real element does not occlude a virtual one. The method presented enables AR environments based on fiducial markers to support mutual occlusion between a real element and many virtual ones, according to the elements position (depth) in the environment.",
"title": ""
},
{
"docid": "e939e98e090c57e269444ae5d503884b",
"text": "Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.",
"title": ""
},
{
"docid": "f0e21ea25c795f110d3677e51835c099",
"text": "Objective: To assess the use of the Mini-Nutritional Assessment (MNA) in elderly orthopaedic patients.Design: An observation study assessing the nutritional status of female orthopaedic patients.Setting: The orthopaedic wards of the Royal Surrey County Hospital.Subjects: Forty-nine female patients aged 60–103 y; dietary records were obtained for 41 subjects and 36 subjects gave a blood sample for biochemical analysis.Major outcome methods: MNA questionnaire, anthropometry, plasma albumin, transferrin, C-reactive protein (CRP) levels and dietary analyses.Results: The group as a whole had low mean values for body weight, albumin and transferrin and high CRP levels. In addition, the group had mean energy intakes well below the estimated average requirement (EAR) and mean intakes of vitamin D, magnesium, potassium, selenium and non-starch polysaccharides (NSP) were below the lower reference nutrient intakes (LRNI). The MNA screening section categorized 69% of the patients as requiring a full assessment (scored 11 or below), but for the purposes of the study the MNA was completed on all patients. The MNA assessment categorized 16% of the group as ‘malnourished’ (scored<17 points), 47% as ‘at risk’ (scored 17.5–23.5) and 37% as ‘well nourished’ (scored>23.5). Significant differences were found between the malnourished and well nourished groups for body weight (P<0.001), body mass index (BMI) (P<0.001), demiquet (P<0.001) and mindex (P<0.001). Mean values for energy and nutrient intakes showed a clear stepwise increase across the three groups for all nutrients except sodium, with significant differences for protein (P<0.05), carbohydrate (P<0.05), riboflavin (P<0.05) niacin (P<0.05), pyridoxine (P<0.05), folate (P<0.05), calcium (P<0.05), selenium (P<0.05), iron (P<0.05) and NSP (P<0.05) intakes. Stepwise multiple regression analysis indicated that anthropometric assessments were the most predictive factors in the total MNA score. The sensitivity and specificity of the MNA was assessed in comparison with albumin levels, energy intake and mindex. The sensitivity of the MNA classification of those scoring less than 17 points in comparison with albumin levels, energy intake and mindex varied from 27 to 57% and the specificity was 66–100%. This was compared with the sensitivity and specificity of using a score of less than 23.5 on the MNA to predict malnourished individuals. Using this cut-off the sensitivity ranged from 75 to 100%, but the specificity declined to between 37 and 50%.Conclusions: The results suggest that the MNA is a useful diagnostic tool in the identification of elderly patients at risk from malnutrition and those who are malnourished in this hospital setting.Sponsorship: Nestlé Clinical Nutrition, Croydon, Surrey.European Journal of Clinical Nutrition (2000) 54, 555–562",
"title": ""
},
{
"docid": "359d3e06c221e262be268a7f5b326627",
"text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.",
"title": ""
},
{
"docid": "a8a8656f2f7cdcab79662cb150c8effa",
"text": "As networks grow both in importance and size, there is an increasing need for effective security monitors such as Network Intrusion Detection System to prevent such illicit accesses. Intrusion Detection Systems technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on hybrid fuzzy logic and neural network. The key idea is to take advantage of different classification abilities of fuzzy logic and neural network for intrusion detection system. The new model has ability to recognize an attack, to differentiate one attack from another i.e. classifying attack, and the most important, to detect new attacks with high detection rate and low false negative. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.",
"title": ""
},
{
"docid": "f02b44ff478952f1958ba33d8a488b8e",
"text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields and access to wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.",
"title": ""
},
{
"docid": "9e63ef15dc60ba475205fb15f5e9e912",
"text": "When study is spaced across sessions (versus massed within a single session), final performance is greater after spacing. This spacing effect may have multiple causes, and according to the mediator hypothesis, part of the effect can be explained by the use of mediator-based strategies. This hypothesis proposes that when study is spaced across sessions, rather than massed within a session, more mediators will be generated that are longer lasting and hence more mediators will be available to support criterion recall. In two experiments, participants were randomly assigned to study paired associates using either a spaced or massed schedule. They reported strategy use for each item during study trials and during the final test. Consistent with the mediator hypothesis, participants who had spaced (as compared to massed) practice reported using more mediators on the final test. This use of effective mediators also statistically accounted for some - but not all of - the spacing effect on final performance.",
"title": ""
},
{
"docid": "4ad3c199ad1ba51372e9f314fc1158be",
"text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using flip-chip bonder.",
"title": ""
},
{
"docid": "2473b8e8deb0e4b79ac56d49a3894349",
"text": "Generalized hyperpigmentation (GHPT) of the skin may occur as a primary defect of pigmentation or in combination with other variable manifestations. It is visible in a number of diseases such as Addison’s disease (AD), haemochromatosis, porphyria cutanea tarda, scleroderma and neurofibromatosis, but it can also be associated with malignancy and the use of chemotherapeutics or it can be related to acanthosis nigricans in insulin resistance. Skin pigmentation depends on the differences in the amount, type and distribution of melanin produced during melanogenesis in skin melanocytes [1] and remains under the genetic control of more than 120 genes [2]. The most important one is the melanocortin 1 receptor (MC1R) gene [3] (OMIM ID: 155555) located on chromosome 16q24.3 and encoding for a 317-amino-acid G-protein coupled receptor. The MC1R receptor binds α-melanocyte-stimulating hormone (α-MSH) resulting in the activation of adenylyl cyclase, which produces cyclic adenosine monophosphate (cAMP). The increased cAMP concentration activates various intracellular molecular pathways, promotes melanin synthesis and increases the eumelanin to pheomelanin ratio [4]. MC1R receptor also binds ACTH, in this way contributing to the GHPT in AD. Upregulation of MC1R gene expression by UV radiation and α-MSH leads to enhancement of melanogenesis and melanin synthesis induction. Loss-of-function mutations in the MC1R gene are associated with fair skin, poor tanning, propensity to freckles and increased skin cancer risk due to a decrease in eumelanin synthesis and subsequently impaired protection against UV radiation [5-7]. To our knowledge, to date, no data are available considering gain-of-function mutations in the human MC1R gene which could lead to a constant activation of the MC1R receptor and subsequently cause GHPT. We present the case of a patient with a primary type of progressive GHPT in whom AD was suspected. An 11-year-old prepubertal girl with GHPT (Figures 1A-C) was born at term with normal birth weight and height and was first brought to our hospital at the age of 3 years with a suspicion of AD. She had a diffuse grey-brownish discoloration of the skin present since birth. Over the first few years of life she developed symmetrical hyperpigmentation most pronounced on her trunk and neck. Later, hyperpigmentation began to affect her hands and feet, and finally the whole body – sparing only the cheeks and finger tips. Her skin was very dry and atopic, and scars were not hyperCorresponding autor: Assoc. Prof. Marek Niedziela MD, PhD Department of Paediatric Endocrinology and Rheumatology Poznan University of Medical Sciences 27/33 Szpitalna St 60-572 Poznan, Poland Phone: +48 61 849 14 81 Fax: 48 61 848 02 91 E-mail: mniedzie@ump.edu.pl Letter to the Editor",
"title": ""
},
{
"docid": "1ead17fc0770233db8903db2b4f15c79",
"text": "The major objective of this paper is to examine the determinants of collaborative commerce (c-commerce) adoption with special emphasis on Electrical and Electronic organizations in Malaysia. Original research using a self-administered questionnaire was distributed to 400 Malaysian organizations. Out of the 400 questionnaires posted, 109 usable questionnaires were returned, yielding a response rate of 27.25%. Data were analysed by using correlation and multiple regression analysis. External environment, organization readiness and information sharing culture were found to be significant in affecting organ izations decision to adopt c-commerce. Information sharing culture factor was found to have the strongest influence on the adoption of c-commerce, followed by organization readiness and external environment. Contrary to other technology adoption studies, this research found that innovation attributes have no significant influence on the adoption of c-commerce. In terms of theoretical contributions, this study has extended previous researches conducted in western countries and provides great potential by advancing the understanding between the association of adoption factors and c-commerce adoption level. This research show that adoption studies could move beyond studying the factors based on traditional adoption models. Organizations planning to adopt c-commerce would also be able to applied strategies based on the findings from this research.",
"title": ""
}
] |
scidocsrr
|
4940099046fbc5959ea45783f2d95351
|
Adaptive rate stream processing for smart grid applications on clouds
|
[
{
"docid": "cf2a07644776aed828a4d6c8a51e240b",
"text": "Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low latency response, Cloud platforms for scalable operations and privacy policies to mitigate information leakage in an information rich environment. Such an informatics approach is being used in the DoE sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los",
"title": ""
},
{
"docid": "24da291ca2590eb614f94f8a910e200d",
"text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.",
"title": ""
}
] |
[
{
"docid": "cbae4d5eb347a8136f34fb370d28f46b",
"text": "Available online 18 November 2013",
"title": ""
},
{
"docid": "eb7c34c4959c39acb18fc5920ff73dba",
"text": "Acoustic evidence suggests that contemporary Seoul Korean may be developing a tonal system, which is arising in the context of a nearly completed change in how speakers use voice onset time (VOT) to mark the language’s distinction among tense, lax and aspirated stops.Data from 36 native speakers of varying ages indicate that while VOT for tense stops has not changed since the 1960s, VOT differences between lax and aspirated stops have decreased, in some cases to the point of complete overlap. Concurrently, the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops. Hence the underlying contrast between lax and aspirated stops is maintained by younger speakers, but is phonetically manifested in terms of differentiated tonal melodies: laryngeally unmarked (lax) stops trigger the introduction of a default L tone, while laryngeally marked stops (aspirated and tense) introduce H, triggered by a feature specification for [stiff].",
"title": ""
},
{
"docid": "8b4243851ffaf5a673a5dbbb9ec34094",
"text": "Proposed cache compression schemes make design-time assumptions on value locality to reduce decompression latency. For example, some schemes assume that common values are spatially close whereas other schemes assume that null blocks are common. Most schemes, however, assume that value locality is best exploited by fixed-size data types (e.g., 32-bit integers). This assumption falls short when other data types, such as floating-point numbers, are common. This paper makes two contributions. First, HyComp -- a hybrid cache compression scheme -- selects the best-performing compression scheme, based on heuristics that predict data types. Data types considered are pointers, integers, floating-point numbers and the special (and trivial) case of null blocks. Second, this paper contributes with a compression method that exploits value locality in data types with predefined semantic value fields, e.g., as in the exponent and the mantissa in floating-point numbers. We show that HyComp, augmented with the proposed floating-point-number compression method, offers superior performance in comparison with prior art.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "f8a89a023629fa9bcb2c3566b6817b0c",
"text": "In this paper, we propose a robust on-the-fly estimator initialization algorithm to provide high-quality initial states for monocular visual-inertial systems (VINS). Due to the non-linearity of VINS, a poor initialization can severely impact the performance of either filtering-based or graph-based methods. Our approach starts with a vision-only structure from motion (SfM) to build the up-to-scale structure of camera poses and feature positions. By loosely aligning this structure with pre-integrated IMU measurements, our approach recovers the metric scale, velocity, gravity vector, and gyroscope bias, which are treated as initial values to bootstrap the nonlinear tightly-coupled optimization framework. We highlight that our approach can perform on-the-fly initialization in various scenarios without using any prior information about system states and movement. The performance of the proposed approach is verified through the public UAV dataset and real-time onboard experiment. We make our implementation open source, which is the initialization part integrated in the VINS-Mono1.",
"title": ""
},
{
"docid": "9d33565dbd5148730094a165bb2e968f",
"text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.",
"title": ""
},
{
"docid": "59a480f6e29c0e4919a8e26393b8eb8d",
"text": "monitoring: a proof-of-concept of event-driven business activity management. Structured Abstract: Purpose. The purpose of this paper is to show how to employ complex event processing (CEP) for the observation and management of business processes. It proposes a conceptual architecture of BPM event producer, processor, and consumer and describes technical implications for the application with standard software in a perfect order scenario. Design/methodology/approach. The authors discuss business process analytics as the technological background. The capabilities of CEP in a BPM context are outlined an architecture design is proposed. A sophisticated proof-of-concept demonstrates its applicability. Findings. The results overcome the separation and data latency issues of process controlling, monitoring, and simulation. Distinct analyses of past, present, and future blur into a holistic real-time approach. The authors highlight the necessity for configurable event producer in BPM engines, process event support in CEP engines, a common process event format, connectors to visualizers, notifiers and return channels to the BPM engine. Research limitations. Further research will thoroughly evaluate the approach in a variety of business settings. New concepts and standards for the architecture's building blocks will be needed to improve maintainability and operability. Practical implications. Managers learn how CEP can yield insights into business processes' operations. The paper illustrates a path to overcome inflexibility, latency, and missing feedback mechanisms of current process modeling and control solutions. Software vendors might be interested in the conceptualization and the described needs for further development. Originality/value. So far, there is no commercial CEP-based BPM solution which facilitates a round trip from insight to action as outlines. As major software vendors have begun developing solutions (BPM/BPA solutions), this paper will stimulate a debate between research and practice on suitable design and technology.",
"title": ""
},
{
"docid": "bff0d845fa6281b13da96118bdfbeaeb",
"text": "A hash tag is defined to be a word or phrase prefixed with the symbol \"#\". It is widely used in current social media sites including Twitter and Google+, and serves as a significant meta tag to categorize users' messages, to propagate ideas and topic trends. The use of hash tags has become an integral part of the social media culture. However, the free-form nature and the varied contexts of hash tags bring challenges: how to understand hash tags and discover their relationships? In this paper, we propose Tag-Latent Dirichlet Allocation (TLDA), a new topic modeling approach to bridge hash tags and topics. TLDA extends Latent Dirichlet Allocation by incorporating the observed hash tags in the generative process. In TLDA, a hash tag is mapped into the form of a mixture of shared topics. This representation further enables the analysis of the relationships between the hash tags. Applying our model to tweet data, we first illustrate the ability of our approach to explain hard-to-understand hash tags with topics. We also demonstrate that our approach enables users to further analyze the relationships between the hash tags.",
"title": ""
},
{
"docid": "c797e42772802ee9924a970593e5c81e",
"text": "Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay.",
"title": ""
},
{
"docid": "4b0cf6392d84a0cc8ab80c6ed4796853",
"text": "This paper introduces the Finite-State TurnTaking Machine (FSTTM), a new model to control the turn-taking behavior of conversational agents. Based on a non-deterministic finite-state machine, the FSTTM uses a cost matrix and decision theoretic principles to select a turn-taking action at any time. We show how the model can be applied to the problem of end-of-turn detection. Evaluation results on a deployed spoken dialog system show that the FSTTM provides significantly higher responsiveness than previous approaches.",
"title": ""
},
{
"docid": "1ee352ff083da1f307674414a5640d64",
"text": "The present article examines personality as a predictor of college achievement beyond the traditional predictors of high school grades (HSGPA) and SAT scores. In an undergraduate sample (N=131), self and informant-rated conscientiousness using the Big Five Inventory (BFI; John, Donahue & Kentle, 1991) robustly correlated with academic achievement as indexed by both freshman GPA and senior GPA. A model including traditional predictors and informant ratings of conscientiousness accounted for 18% of the variance in freshman GPA and 37% of the variance in senior GPA; conscientiousness alone explained unique variance in senior GPA beyond the traditional predictors, even when freshman GPA was included in the model. Conscientiousness is a valid and unique predictor of college performance, and informant ratings may be useful in its assessment for this purpose. Acquaintance reports 3 Acquaintance reports of personality and academic achievement: A case for conscientiousness The question of what makes a good student “good” lies at the core of a socially-relevant discussion of college admissions criteria. While past research has shown personality variables to be related to school performance (e.g. Costa & McCrae, 1992; De Raad, 1996), academic achievement is still widely assumed to be more a function of intellectual ability than personality. The purpose of this study is to address two ambiguities that trouble past research in this area: the choice of conceptually appropriate outcome measures and the overuse of self-report data. A highly influential meta-analysis by Barrick and Mount (1991) concluded that conscientiousness is a robust and valid predictor of job performance across all criteria and occupations. Soon after, Costa and McCrae (1992) provided evidence that conscientiousness is likewise related to academic performance. This finding has been replicated by others (recently, Chamorro-Premuzic & Farnham, 2003a and 2003b). Moreover, conscientiousness appears to be free of some of the unwanted complications associated with ability as assessed by the SAT: Hogan and Hogan (1995) reported that personality inventories generally do not systematically discriminate against any ethnic or national group, and thus may offer more equitable bases for selection (see also John, et al., 1991). Still, skepticism remains. Farsides and Woodfield (2003) called the relationship between personality variables and academic performance in previous literature “erratic and, where present, modest” (p. 1229). Green, Peters and Webster (1991) found academic success only weakly associated with personality factors; Rothstien, Paunonen, Rush and King (1994) found that the Big Five factors failed to significantly predict academic performance criteria among a sample of MBA students; Allik and Realo (1997) and Diseth (2003) found most of the valid variance in achievement to be unrelated to personality. Acquaintance reports 4 The current study seeks to address two pervasive obstructions to conceptual clarity in the previous literature: 1) Lack of consistency in the measurement of “academic achievement.” Past studies have used individual exam grades, final grades in a single course, semester GPA, year-end GPA, GPA at the time of the study, or variables such as attendance or participation. The present study uses concrete and consequential outcomes: freshman cumulative GPA (fGPA; the measure most commonly employed in previous research) and senior cumulative GPA (sGPA; a final, more comprehensive measure of college success.). 
2) Near-exclusive use of self-report personality measures. Reliance on self-reports can be problematic because what one believes of oneself may or may not be an accurate or complete assessment of one’s true strengths and weaknesses. Thus, the present research utilizes ratings of personality provided by the self and by informants. As the personality inventories used in these analyses were administered up to three years prior to the measurement of the dependent variable, finding a meaningful relationship between the two will provide evidence that one’s traits – evaluated by someone else and a number of years in the past! – are consistent enough to serve as useful predictors of a real and important outcome. Within the confines of these parameters, and based upon previous literature, it is hypothesized that: conscientiousness will fail to show the mean differences in ethnicity problematic of SAT scores; both selfand informant-rated conscientiousness will be positively and significantly related to both outcome measures; and finally, conscientiousness will be capable of explaining incremental variance in both outcome measures beyond what is accounted for by the traditional predictors. Method Acquaintance reports 5 Participants This study examined the predictors of academic achievement in a sample of 131 target participants (54.2% female, 45.8% male), who were those among an original sample of 217 undergraduates with sufficiently complete data for the present analyses, as described below. Due to the “minority majority” status of the UCR campus population, the diverse sample included substantial proportions of Asians or Asian Americans (43.5%), Hispanics or Latin Americans (19.8%), Caucasians (16.0%), African Americans (12.9%), and students of other ethnic descent (7.6%). The study also includes 258 informants who described participants with whom they were acquainted. Each target participant and informant was paid $10 per hour. The larger data set was originally designed to explore issues of accuracy in personality judgment. Other analyses, completed and planned (see Letzring, Block & Funder, 2004; Letzring, Wells & Funder, in press; Vazire & Funder, 2006), address different topics and do not overlap with those in the current study. Targets & Informants To deal with missing data, all participants in the larger sample who were lacking any one of the predictor variables (SAT score, HSGPA, or either selfor informant-rated Conscientiousness) were dropped (reducing the N from 217 to 153 at this stage of selection). Among the remaining participants, 21 were missing sGPA (i.e., had not yet graduated at the time the GPAs were collected from the University) but had a junior-level GPA; for these, a regression using junior GPA to predict sGPA was performed (r = 0.96 between the two) and the resulting score was imputed. 22 participants had neither sGPA nor a junior GPA; these last were dropped, leaving the final N = 131 for target participants. Means and standard deviations for both the Acquaintance reports 6 dependent and predictor variables in this smaller sample were comparable to those of the larger group from which they were drawn. Each participant provided contact information for two people who knew him or her best and would be willing to provide information about him or her. 127 participants in our target sample recruited the requested 2 informants, while 4 participants recruited only 1, for a total of 258 informants. 
Measures Traditional Predictors Participants completed a release form granting access to their academic records; HSGPA and SAT scores were later obtained from the UCR Registrar’s Office. The Registrar provided either an SAT score or an SAT score converted from an American College Testing (ACT) score. Only the total score (rather than the separate verbal/quantitative sub-scores) was used. Personality In order to assess traits at a global level, participants provided self-reports and informants provided peer ratings using the Big Five Inventory (BFI; John, Donahue & Kentle, 1991), which assesses extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. BFI-scale reliabilities and other psychometric properties have been shown to be similar to those of the much longer scales of Costa and McCrae’s (1990) NEO-FFI (John, et al. 1991). Where two informants were available (all but 4 cases), a composite of their ratings was created by averaging the conscientiousness scale scores. Reliability of the averaged informants’ conscientiousness rating was .59. Academic performance Acquaintance reports 7 Cumulative fGPA and sGPA were collected from the campus Registrar. While the data collection phase of the original, larger project began a few years before the analyses completed for this study and all of the participants had progressed in their academic standing, not all of them had yet completed their senior year. Participants missing GPA data were handled as described above. Results Analyses examined mean differences among ethnic groups and correlations between each of the potential predictors and the two outcome measures. A final set of analyses entered the predictor variables into hierarchical regressions predicting GPA. Descriptive Statistics Mean differences by ethnicity in HSGPA, SAT scores, and BFI scores were examined with one-way ANOVAs (see Table 1). Members of the different groups were admitted to UCR with approximately the same incoming HSGPA (M = 3.51) and very little variation (SD = 0.37), F(4, 126) = 0.68, p = 0.609. There was, however, a significant difference between ethnicities in their entering SAT scores, F(4, 126) = 5.56, p = 3.7 x 10, with Caucasians the highest and African Americans the lowest. As predicted, there were no significant differences in conscientiousness across ethnicities. Correlations There were no significant correlations between gender and any of the variables included in this study. HSGPA and SAT scores – the two traditional predictors – are only modestly related in this sample: r(131) = 0.12, n.s., indicating that they are independently capable of explaining variance in college GPA. sGPA, containing all the variance of fGPA, is thus well correlated with it, r(131) = 0.68, p < .05. Correlations between academic performance and the Acquaintance reports 8 hypothesized predictors of performance (HSGPA, SAT scores, and conscientiousness) are presented in Table 2. While the traditional ",
"title": ""
},
{
"docid": "fae55cf048de769f7b57c3a02cc02f8e",
"text": "Ranking fraud in the mobile App market refers to fraudulent or deceptive activities which have a purpose of bumping up the Apps in the popularity list. Indeed, it becomes more and more frequent for App developers to use shady means, such as inflating their Apps' sales or posting phony App ratings, to commit ranking fraud. While the importance of preventing ranking fraud has been widely recognized, there is limited understanding and research in this area. To this end, in this paper, we provide a holistic view of ranking fraud and propose a ranking fraud detection system for mobile Apps. Specifically, we first propose to accurately locate the ranking fraud by mining the active periods, namely leading sessions, of mobile Apps. Such leading sessions can be leveraged for detecting the local anomaly instead of globalanomaly of App rankings. Furthermore, we investigate three types of evidences, i.e., ranking based evidences, rating based evidences and review based evidences, by modeling Apps' ranking, rating and review behaviors through statistical hypotheses tests. In addition, we propose an optimization based aggregation method to integrate all the evidences for fraud detection. Finally, we evaluate the proposed system with real-world App data collected from the iOS App Store for a long time period. In the experiments, we validate the effectiveness of the proposed system, and show the scalability of the detection algorithm as well as some regularity of ranking fraud activities.",
"title": ""
},
{
"docid": "6b32dcdd20733e366ff21b686cbd76b6",
"text": "We present a new approach to extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction was not studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the stateof-the-art approach of Fu et al. (2014) on three datasets from different languages.",
"title": ""
},
{
"docid": "022487e0851506af02cc2a73675b3532",
"text": "Communicators tend to share more stereotype-consistent than stereotype-inconsistent information. The authors propose and test a situated functional model of this stereotype consistency bias: stereotype-consistent and inconsistent information differentially serve 2 central functions of communication--sharing information and regulating relationships; depending on the communication context, information seen to serve these different functions better is more likely communicated. Results showed that stereotype-consistent information is perceived as more socially connective but less informative than inconsistent information, and when the stereotype is perceived to be highly shared in the community, more stereotype-consistent than inconsistent information is communicated due to its greater social connectivity function. These results highlight the need to examine communication as a dynamic and situated social activity.",
"title": ""
},
{
"docid": "83452d8424d97b1c1f5826d32b8ccbaa",
"text": "Creating meaning from a wide variety of available information and being able to choose what to learn are highly relevant skills for learning in a connectivist setting. In this work, various approaches have been utilized to gain insights into learning processes occurring within a network of learners and understand the factors that shape learners' interests and the topics to which learners devote a significant attention. This study combines different methods to develop a scalable analytic approach for a comprehensive analysis of learners' discourse in a connectivist massive open online course (cMOOC). By linking techniques for semantic annotation and graph analysis with a qualitative analysis of learner-generated discourse, we examined how social media platforms (blogs, Twitter, and Facebook) and course recommendations influence content creation and topics discussed within a cMOOC. Our findings indicate that learners tend to focus on several prominent topics that emerge very quickly in the course. They maintain that focus, with some exceptions, throughout the course, regardless of readings suggested by the instructor. Moreover, the topics discussed across different social media differ, which can likely be attributed to the affordances of different media. Finally, our results indicate a relatively low level of cohesion in the topics discussed which might be an indicator of a diversity of the conceptual coverage discussed by the course participants.",
"title": ""
},
{
"docid": "eb2bb3518ddc95b1920c70f570fb6aa8",
"text": "The scale of the Software-Defined Network (SDN) Controller design problem has become apparent with the expansion of SDN deployments. Initial SDN deployments were small-scale, single controller environments for research and usecase testing. Today, enterprise deployments requiring multiple controllers are gathering momentum e.g. Google's backbone network, Microsoft's public cloud, and NTT's edge gateway. Third-party applications are also becoming available e.g. HP SDN App Store. The increase in components and interfaces for the evolved SDN implementation increases the security challenges of the SDN controller design. In this work, the requirements of a secure, robust, and resilient SDN controller are identified, stateof- the-art open-source SDN controllers are analyzed with respect to the security of their design, and recommendations for security improvements are provided. This contribution highlights the gap between the potential security solutions for SDN controllers and the actual security level of current controller designs.",
"title": ""
},
{
"docid": "8be957572c846ddda107d8343094401b",
"text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms",
"title": ""
},
{
"docid": "76d4f6537d8c9f9e1f213f53ce97dd48",
"text": "SAFE is a large-scale, clean-slate co-design project encompassing hardware architecture, programming languages, and operating systems. Funded by DARPA, the goal of SAFE is to create a secure computing system from the ground up. SAFE hardware provides memory safety, dynamic type checking, and native support for dynamic information flow control. The Breeze programming language leverages the security features of the underlying machine, and the “zero kernel” operating system avoids relying on any single privileged component for overall system security. The SAFE project is working towards formally verifying security properties of the runtime software. The SAFE system sets a new high-water mark for system security, allowing secure applications to be built on a solid foundation rather than on the inherently vulnerable conventional platforms available today.",
"title": ""
},
{
"docid": "e97201f22acbd963cdffb29f95718f92",
"text": "Nowadays basic algorithms such as Apriori and Eclat often are conceived as mere textbook examples without much practical applicability: in practice more sophisticated algorithms with better performance have to be used. We would like to challenge that point of view by showing that a carefully assembled implementation of Eclat outperforms the best algorithms known in the field, at least for dense datasets. For that we view Eclat as a basic algorithm and a bundle of optional algorithmic features that are taken partly from other algorithms like lcm and Apriori, partly new ones. We evaluate the performance impact of these different features and report about results of experiments that support our claim of the competitiveness of Eclat.",
"title": ""
},
{
"docid": "0ec8f9610a7f02b311396a18ea55eaed",
"text": "Mental disorders are highly prevalent and cause considerable suffering and disease burden. To compound this public health problem, many individuals with psychiatric disorders remain untreated although effective treatments exist. We examine the extent of this treatment gap. We reviewed community-based psychiatric epidemiology studies that used standardized diagnostic instruments and included data on the percentage of individuals receiving care for schizophrenia and other non-affective psychotic disorders, major depression, dysthymia, bipolar disorder, generalized anxiety disorder (GAD), panic disorder, obsessive-compulsive disorder (OCD), and alcohol abuse or dependence. The median rates of untreated cases of these disorders were calculated across the studies. Examples of the estimation of the treatment gap for WHO regions are also presented. Thirty-seven studies had information on service utilization. The median treatment gap for schizophrenia, including other non-affective psychosis, was 32.2%. For other disorders the gap was: depression, 56.3%; dysthymia, 56.0%; bipolar disorder, 50.2%; panic disorder, 55.9%; GAD, 57.5%; and OCD, 57.3%. Alcohol abuse and dependence had the widest treatment gap at 78.1%. The treatment gap for mental disorders is universally large, though it varies across regions. It is likely that the gap reported here is an underestimate due to the unavailability of community-based data from developing countries where services are scarcer. To address this major public health challenge, WHO has adopted in 2002 a global action programme that has been endorsed by the Member States.",
"title": ""
}
] |
scidocsrr
|
0c07540354423bcd11655557a4e99ef6
|
Weighted Convolutional Neural Network Ensemble
|
[
{
"docid": "0a3f5ff37c49840ec8e59cbc56d31be2",
"text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.",
"title": ""
}
] |
[
{
"docid": "7cfdad39cebb90cac18a8f9ae6a46238",
"text": "A malware macro (also called \"macro virus\") is the code that exploits the macro functionality of office documents (especially Microsoft Office’s Excel and Word) to carry out malicious action against the systems of the victims that open the file. This type of malware was very popular during the late 90s and early 2000s. After its rise when it was created as a propagation method of other malware in 2014, macro viruses continue posing a threat to the user that is far from being controlled. This paper studies the possibility of improving macro malware detection via machine learning techniques applied to the properties of the code.",
"title": ""
},
{
"docid": "179299ec6ebad6bc0a778b002e36b8ee",
"text": "A steady plant monitoring is necessary to control the spread of a disease but its cost may be high and as a result, the producers often skip critical preventive procedures to keep the production cost low. Although, official disease recognition is a responsibility of professional agriculturists, low cost observation and computational assisted diagnosis can effectively help in the recognition of a plant disease in its early stages. The most important symptoms of a disease such as lesions in the leaves, fruits, stems, etc, are visible. The features (color, area, number of spots) of these lesions can form significant decision criteria supplemented by other more expensive molecular analyses and tests that can follow. An image processing technique capable of recognizing the plant lesion features is described in this paper. The low complexity of this technique can allow its implementation on mobile phones. The achieved accuracy is higher than 90% according to the experimental results.",
"title": ""
},
{
"docid": "a98631b46893645a94a83995836dc71d",
"text": "This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.",
"title": ""
},
{
"docid": "dc5f111bfe7fa27ae7e9a4a5ba897b51",
"text": "We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows the inference at the speed of 150ms per image. This makes our AffordanceNet well suitable for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.",
"title": ""
},
{
"docid": "79c1752ab2baa97cbb9127480732a57d",
"text": "This paper studies the cooperative learning of two generative models. Both models are parametrized by ConvNets. The first model is a deep energy-based model, whose energy function is defined by a bottom-up ConvNet, which maps the observed image to the energy. We call it the descriptor network. The second model is a generator network, which is defined by a top-down ConvNet, which maps the latent factors to the observed image. The maximum likelihood learning algorithms of both models involve MCMC sampling such as Langevin dynamics. We observe that the two learning algorithms can be seamlessly interwoven into a cooperative learning algorithm that can train both models simultaneously. Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthetic examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model. After that, the generator model learns from how the MCMC changes its synthetic examples. That is, the descriptor model teaches the generator model by MCMC, so that the generator model accumulates the MCMC transitions and reproduces them by direct ancestral sampling. We call this scheme MCMC teaching. We show that the cooperative algorithm can learn highly realistic generative models.",
"title": ""
},
{
"docid": "a0c42d2b0ffd4a784c016663dfb6bb4e",
"text": "College of Information and Electrical Engineering, China Agricultural University, Beijing, China Abstract. This paper presents a system framework taking the advantages of the WSN for the real-time monitoring on the water quality in aquaculture. We design the structure of the wireless sensor network to collect and continuously transmit data to the monitoring software. Then we accomplish the configuration model in the software that enhances the reuse and facility of the monitoring project. Moreover, the monitoring software developed to represent the monitoring hardware and data visualization, and analyze the data with expert knowledge to implement the auto control. The monitoring system has been realization of the digital, intelligent, and effectively ensures the quality of aquaculture water. Practical deployment results are to show the system reliability and real-time characteristics, and to display good effect on environmental monitoring of water quality.",
"title": ""
},
{
"docid": "38cc554de2f76b7c3c76347f6b96bae0",
"text": "In bipolar disorder, episodes of depression and mania are associated with dramatic disturbances in sleep, which experiments show are likely to contribute to the pathogenesis of the episodes. A recent finding that 18 patients’ manic-depressive cycles oscillated in synchrony with biweekly surges in amplitude of the moon’s tides provided a clue to the cause of the sleep-disturbances. Analyses of one of the patients’ sleep–wake cycles suggest that his mood cycles arose when a circadian rhythm that normally is entrained to dawn and controls the daily onset of wakefulness became entrained instead to 24.8-h recurrences of every second 12.4-h tidal cycle. The finding provides the basis for a comprehensive description of the pathogenesis and pathophysiology of the mood cycle.",
"title": ""
},
{
"docid": "e70e734290d1060c338ab6369ebf3e43",
"text": "One of the major goals of computational sequence analysis is to find sequence similarities, which could serve as evidence of structural and functional conservation, as well as of evolutionary relations among the sequences. Since the degree of similarity is usually assessed by the sequence alignment score, it is necessary to know if a score is high enough to indicate a biologically interesting alignment. A powerful approach to defining score cutoffs is based on the evaluation of the statistical significance of alignments. The statistical significance of an alignment score is frequently assessed by its P-value, which is the probability that this score or a higher one can occur simply by chance, given the probabilistic models for the sequences. In this review we discuss the general role of P-value estimation in sequence analysis, and give a description of theoretical methods and computational approaches to the estimation of statistical signifiance for important classes of sequence analysis problems. In particular, we concentrate on the P-value estimation techniques for single sequence studies (both score-based and score-free), global and local pairwise sequence alignments, multiple alignments, sequence-to-profile alignments and alignments built with hidden Markov models. We anticipate that the review will be useful both to researchers professionally working in bioinformatics as well as to biomedical scientists interested in using contemporary methods of DNA and protein sequence analysis.",
"title": ""
},
{
"docid": "1489207c35a613d38a4f9c06816604f0",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPMW with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "316e771f85676bdf85dfce1e4ea3eaa8",
"text": "Stream processing is important for continuously transforming and analyzing the deluge of data that has revolutionized our world. Given the diversity of application domains, streaming applications must be both easy to write and performant. Both goals can be accomplished by high-level programming languages. Dedicated language syntax helps express stream programs clearly and concisely, whereas the compiler and runtime system of the language help optimize runtime performance. This paper describes the language runtime for the IBM Streams Processing Language (SPL) used to program the distributed IBM Streams platform. It gives a system overview and explains several language-based optimizations implemented in the SPL runtime: fusion, thread placement, fission, and transport optimizations.",
"title": ""
},
{
"docid": "8adb07a99940383139f0d4ed32f68f7c",
"text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.",
"title": ""
},
{
"docid": "63893d6406c581e9598b00f7ba95a065",
"text": "Security researchers can send vulnerability notifications to take proactive measures in securing systems at scale. However, the factors affecting a notification’s efficacy have not been deeply explored. In this paper, we report on an extensive study of notifying thousands of parties of security issues present within their networks, with an aim of illuminating which fundamental aspects of notifications have the greatest impact on efficacy. The vulnerabilities used to drive our study span a range of protocols and considerations: exposure of industrial control systems; apparent firewall omissions for IPv6-based services; and exploitation of local systems in DDoS amplification attacks. We monitored vulnerable systems for several weeks to determine their rate of remediation. By comparing with experimental controls, we analyze the impact of a number of variables: choice of party to contact (WHOIS abuse contacts versus national CERTs versus US-CERT), message verbosity, hosting an information website linked to in the message, and translating the message into the notified party’s local language. We also assess the outcome of the emailing process itself (bounces, automated replies, human replies, silence) and characterize the sentiments and perspectives expressed in both the human replies and an optional anonymous survey that accompanied our notifications. We find that various notification regimens do result in different outcomes. The best observed process was directly notifying WHOIS contacts with detailed information in the message itself. These notifications had a statistically significant impact on improving remediation, and human replies were largely positive. However, the majority of notified contacts did not take action, and even when they did, remediation was often only partial. Repeat notifications did not further patching. These results are promising but ultimately modest, behooving the security community to more deeply investigate ways to improve the effectiveness of vulnerability notifications.",
"title": ""
},
{
"docid": "2b677a052846d4f52f7b6a1eac94114d",
"text": "This paper presents a unifying view of messagepassing algorithms, as methods to approximate a complex Bayesian network by a simpler network with minimum information divergence. In this view, the difference between mean-field methods and belief propagation is not the amount of structure they model, but only the measure of loss they minimize (‘exclusive’ versus ‘inclusive’ Kullback-Leibler divergence). In each case, message-passing arises by minimizing a localized version of the divergence, local to each factor. By examining these divergence measures, we can intuit the types of solution they prefer (symmetry-breaking, for example) and their suitability for different tasks. Furthermore, by considering a wider variety of divergence measures (such as alpha-divergences), we can achieve different complexity and performance goals.",
"title": ""
},
{
"docid": "6f989e22917aa2f99749701c8509fcca",
"text": "The reflection of an object can be distorted by undulations of the reflector, be it a funhouse mirror or a fluid surface. Painters and photographers have long exploited this effect, for example, in imaging scenery distorted by ripples on a lake. Here, we use this phenomenon to visualize micrometric surface waves generated as a millimetric droplet bounces on the surface of a vibrating fluid bath (Bush 2015b). This system, discovered a decade ago (Couder et al. 2005), is of current interest as a hydrodynamic quantum analog; specifically, the walking droplets exhibit several features reminiscent of quantum particles (Bush 2015a).",
"title": ""
},
{
"docid": "848bccdce2ff40f181c068bd0359bbe5",
"text": "This letter presents the design and results of low-loss discrete dielectric flat reflectarray and lens for E-band. Using two different kinds of feed, 3-D-pyramidal (wideband) horn and 2 × 2 planar microstrip array (narrowband) antenna, the radiation performances of the two collimating structures are investigated. The discrete lens is optimized to cover the frequencies 71-86 GHz (71-76- and 81-86-GHz bands), while the discrete reflectarray is optimized to cover the 71-76-GHz band. The presented designs utilize the principle of perforated dielectric substrate using a square lattice of drilled holes of different radii and can be fabricated using standard printed circuit board (PCB) technology. The discrete lens has 41 × 41 unit cells and thickness of 6.35 mm, while the reflectarray has 40 × 40 unit cells and thickness of 3.24 mm. A good impedance matching ( |S11|<; -10 dB) and peak gain of 34 ±1 dB with maximum aperture efficiency of 44.6% are achieved over 71-86 GHz for the lens case. On the other hand, reflectarray with peak gain of 32 ±1 dB and aperture efficiency of 41.9% are achieved for 71-76-GHz band.",
"title": ""
},
{
"docid": "7842e5c7ad3dc11d9d53b360e4e2691a",
"text": "It is becoming obvious that all cancers have a defe ctiv p53 pathway, either through TP53 mutation or deregulation of the tumor suppressor function of the wild type TP53 . In this study we examined the expression of P53 and Caspase 3 in transperitoneally injected Ehrlich As cite carcinoma cells (EAC) treated with Tetrodotoxin in the liver of adult mice in order to evaluate the po ssible pro apoptotic effect of Tetrodotoxin . Results: Early in the treatment, num erous EAC detected in the large blood vessels & cen tral veins and expressed both of P53 & Caspase 3 in contrast to the late absence of P53 expressing EAC at the 12 th day of Tetrodotoxin treatment. In the same context , predominantly the perivascular hepatocytes expresse d Caspase 3 in contrast to the more diffuse express ion pattern late with Tetrodotoxin treatment. Non of the hepatocytes ever expressed P5 3 neither with early nor late Tetrodotoxin treatmen t. Conclusion: Tetrodotoxin therapy has a proapoptotic effect on Ehrlich Ascites carcin oma Cells (EAC). This may be through enhancing the tumor suppressor function of the wild type TP53 with subsequent Caspase 3 activation .",
"title": ""
},
{
"docid": "23653aa4b64bfece93385bdae48fee4f",
"text": "We offer a systematic analysis of the use of deep learning networks for stock market analysis and prediction. Its ability to extract features from a large set of raw data without relying on prior knowledge of predictors makes deep learning potentially attractive for stock market prediction at high frequencies. Deep learning algorithms vary considerably in the choice of network structure, activation function, and other model parameters, and their performance is known to depend heavily on the method of data representation. Our study attempts to provides a comprehensive and objective assessment of both the advantages and drawbacks of deep learning algorithms for stock market analysis and prediction. Using highfrequency intraday stock returns as input data, we examine the effects of three unsupervised feature extraction methods—principal component analysis, autoencoder, and the restricted Boltzmann machine— on the network’s overall ability to predict future market behavior. Empirical results suggest that deep neural networks can extract additional information from the residuals of the autoregressive model and improve prediction performance; the same cannot be said when the autoregressive model is applied to the residuals of the network. Covariance estimation is also noticeably improved when the predictive network is applied to covariance-based market structure analysis. Our study offers practical insights and potentially useful directions for further investigation into how deep learning networks can be effectively used for stock market analysis and prediction. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d49260a42c4d800963ca8779cf50f1ee",
"text": "Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner-products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological properties of input data. In this paper, we enhance the autoencoder’s ability to learn effective data representations by aligning inner products between codes with respect to a kernel matrix. By doing so, the proposed kernelized autoencoder allows learning similarity-preserving embeddings of input data, where the notion of similarity is explicitly controlled by the user and encoded in a positive semi-definite kernel matrix. Experiments are performed for evaluating both reconstruction and kernel alignment performance in classification tasks and visualization of high-dimensional data. Additionally, we show that our method is capable to emulate kernel principal component analysis on a denoising task, obtaining competitive results at a much lower computational cost.",
"title": ""
},
{
"docid": "e6c5ca76cd14b398ac82a2f38b0a9b12",
"text": "Modern dairies cause the accumulation of considerable quantity of dairy manure which is a potential hazard to the environment. Dairy manure can also act as a principal larval resource for many insects such as the black soldier fly, Hermetia illucens. The black soldier fly larvae (BSFL) are considered as a new biotechnology to convert dairy manure into biodiesel and sugar. BSFL are a common colonizer of large variety of decomposing organic material in temperate and tropical areas. Adults do not need to be fed, except to take water, and acquired enough nutrition during larval development for reproduction. Dairy manure treated by BSFL is an economical way in animal facilities. Grease could be extracted from BSFL by petroleum ether, and then be treated with a two-step method to produce biodiesel. The digested dairy manure was hydrolyzed into sugar. In this study, approximately 1248.6g fresh dairy manure was converted into 273.4 g dry residue by 1200 BSFL in 21 days. Approximately 15.8 g of biodiesel was gained from 70.8 g dry BSFL, and 96.2g sugar was obtained from the digested dairy manure. The residual dry BSFL after grease extraction can be used as protein feedstuff.",
"title": ""
}
] |
scidocsrr
|
8e57106fef15f6ba447d8ad493f2afc8
|
Exponentially Weighted Moving Average Charts for Detecting Concept Drift
|
[
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
}
] |
[
{
"docid": "1a0b7cb5fc505c110a89c3017d4fd07e",
"text": "We propose a robust fitting framework, called Adaptive Kernel-Scale Weighted Hypotheses (AKSWH), to segment multiple-structure data even in the presence of a large number of outliers. Our framework contains a novel scale estimator called Iterative Kth Ordered Scale Estimator (IKOSE). IKOSE can accurately estimate the scale of inliers for heavily corrupted multiple-structure data and is of interest by itself since it can be used in other robust estimators. In addition to IKOSE, our framework includes several original elements based on the weighting, clustering, and fusing of hypotheses. AKSWH can provide accurate estimates of the number of model instances and the parameters and the scale of each model instance simultaneously. We demonstrate good performance in practical applications such as line fitting, circle fitting, range image segmentation, homography estimation, and two--view-based motion segmentation, using both synthetic data and real images.",
"title": ""
},
{
"docid": "98c64622f9a22f89e3f9dd77c236f310",
"text": "After a development process of many months, the TLS 1.3 specification is nearly complete. To prevent past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.",
"title": ""
},
{
"docid": "a27bb5785e61407dc537941a4b839670",
"text": "We have developed a new Linear Support Vector Machine (SVM) training algorithm called OCAS. Its computational effort scales linearly with the sample size. In an extensive empirical evaluation OCAS significantly outperforms current state of the art SVM solvers, like SVMlight, SVMperf and BMRM, achieving speedups of over 1,000 on some datasets over SVMlight and 20 over SVMperf, while obtaining the same precise Support Vector solution. OCAS even in the early optimization steps shows often faster convergence than the so far in this domain prevailing approximative methods SGD and Pegasos. Effectively parallelizing OCAS we were able to train on a dataset of size 15 million examples (itself about 32GB in size) in just 671 seconds --- a competing string kernel SVM required 97,484 seconds to train on 10 million examples sub-sampled from this dataset.",
"title": ""
},
{
"docid": "fc86592d0ea096fdb27cb1bcc3fe28f1",
"text": "This paper presents the design technique, simulation, fabrication and comparison between measured and simulated results of a parallel coupled microstrip BPF. The filter is designed and optimized at 2.44 GHz with a FBW of 3.42%. The first step in designing of this filter is approximated calculation of its lumped component prototype. Admittance inverter is used to transform the lumped component circuit into an equivalent form using microwave structures. After getting the required specifications, the filter structure is realized using parallel coupled technique. Simulation is done using ADS software. Next, optimization is done to achieve low insertion loss and a selective skirt. The simulated filter is fabricated on FR-4 substrate. Comparison between the simulated and measured results shows that they are approximately equal.",
"title": ""
},
{
"docid": "c059d43c51ec35ec7949b0a10d718b6f",
"text": "The problem of signal recovery from its Fourier transform magnitude is of paramount importance in various fields of engineering and has been around for more than 100 years. Due to the absence of phase information, some form of additional information is required in order to be able to uniquely identify the signal of interest. In this paper, we focus our attention on discrete-time sparse signals (of length <inline-formula><tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula>). We first show that if the discrete Fourier transform dimension is greater than or equal to <inline-formula><tex-math notation=\"LaTeX\">$2n$</tex-math></inline-formula>, then almost all signals with <italic> aperiodic</italic> support can be uniquely identified by their Fourier transform magnitude (up to time shift, conjugate flip, and global phase). Then, we develop an efficient two-stage sparse-phase retrieval algorithm (TSPR), which involves: identifying the support, i.e., the locations of the nonzero components, of the signal using a combinatorial algorithm; and identifying the signal values in the support using a convex algorithm. We show that TSPR can <italic> provably</italic> recover most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/2-{\\epsilon }})$</tex-math> </inline-formula>-sparse signals (up to a time shift, conjugate flip, and global phase). We also show that, for most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/4-{\\epsilon }})$</tex-math></inline-formula>-sparse signals, the recovery is <italic>robust</italic> in the presence of measurement noise. These recovery guarantees are asymptotic in nature. Numerical experiments complement our theoretical analysis and verify the effectiveness of TSPR.",
"title": ""
},
{
"docid": "f492a7a6b28dcd2dd0dde45598a85a11",
"text": "We present a detailed study on the rear contact formation of rear-surface-passivated silicon solar cells by full-area screen printing and alloying of aluminum pastes on the locally opened passivation layer. We demonstrate that applying conventional Al pastes exhibits two main problems: (1) high contact depths leading to an enlargement of the contact area and (2) low thicknesses of the Al-doped p+ Si regions in the contact points resulting in poor electron shielding. We show that this inadequate contact formation can be directly linked to the deficiently low percentage of silicon that dissolves into the Al-Si melt during alloying. Thus, by intentionally adding silicon to the Al paste, we could significantly improve the contact geometry by reducing the contact depth and enlarging the Al-p+ thickness in the contact points, enabling a simple industrially feasible way for the rear contact formation of silicon solar cells.",
"title": ""
},
{
"docid": "31865d8e75ee9ea0c9d8c575bbb3eb90",
"text": "Magicians use misdirection to prevent you from realizing the methods used to create a magical effect, thereby allowing you to experience an apparently impossible event. Magicians have acquired much knowledge about misdirection, and have suggested several taxonomies of misdirection. These describe many of the fundamental principles in misdirection, focusing on how misdirection is achieved by magicians. In this article we review the strengths and weaknesses of past taxonomies, and argue that a more natural way of making sense of misdirection is to focus on the perceptual and cognitive mechanisms involved. Our psychologically-based taxonomy has three basic categories, corresponding to the types of psychological mechanisms affected: perception, memory, and reasoning. Each of these categories is then divided into subcategories based on the mechanisms that control these effects. This new taxonomy can help organize magicians' knowledge of misdirection in a meaningful way, and facilitate the dialog between magicians and scientists.",
"title": ""
},
{
"docid": "660e07d4f61efdbecae8fcf125963bc7",
"text": "BACKGROUND\nCancer is set to become a major cause of morbidity and mortality in the coming decades in every region of the world. We aimed to assess the changing patterns of cancer according to varying levels of human development.\n\n\nMETHODS\nWe used four levels (low, medium, high, and very high) of the Human Development Index (HDI), a composite indicator of life expectancy, education, and gross domestic product per head, to highlight cancer-specific patterns in 2008 (on the basis of GLOBOCAN estimates) and trends 1988-2002 (on the basis of the series in Cancer Incidence in Five Continents), and to produce future burden scenario for 2030 according to projected demographic changes alone and trends-based changes for selected cancer sites.\n\n\nFINDINGS\nIn the highest HDI regions in 2008, cancers of the female breast, lung, colorectum, and prostate accounted for half the overall cancer burden, whereas in medium HDI regions, cancers of the oesophagus, stomach, and liver were also common, and together these seven cancers comprised 62% of the total cancer burden in medium to very high HDI areas. In low HDI regions, cervical cancer was more common than both breast cancer and liver cancer. Nine different cancers were the most commonly diagnosed in men across 184 countries, with cancers of the prostate, lung, and liver being the most common. Breast and cervical cancers were the most common in women. In medium HDI and high HDI settings, decreases in cervical and stomach cancer incidence seem to be offset by increases in the incidence of cancers of the female breast, prostate, and colorectum. If the cancer-specific and sex-specific trends estimated in this study continue, we predict an increase in the incidence of all-cancer cases from 12·7 million new cases in 2008 to 22·2 million by 2030.\n\n\nINTERPRETATION\nOur findings suggest that rapid societal and economic transition in many countries means that any reductions in infection-related cancers are offset by an increasing number of new cases that are more associated with reproductive, dietary, and hormonal factors. Targeted interventions can lead to a decrease in the projected increases in cancer burden through effective primary prevention strategies, alongside the implementation of vaccination, early detection, and effective treatment programmes.\n\n\nFUNDING\nNone.",
"title": ""
},
{
"docid": "5c690df3977b078243b9cb61e5e712a6",
"text": "Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. This approach comprehends two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping, the structure is updated in real time utilizing the GPU which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that our developed approach is capable to compute global illumination of complex scenes on interactive times.",
"title": ""
},
{
"docid": "05e6bc54f6175e1f9bb296500bc3d9e7",
"text": "This article describes XRel, a novel approach for storage and retrieval of XML documents using relational databases. In this approach, an XML document is decomposed into nodes on the basis of its tree structure and stored in relational tables according to the node type, with path information from the root to each node. XRel enables us to store XML documents using a fixed relational schema without any information about DTDs and also to utilize indices such as the B+-tree and the R-tree supported by database management systems. Thus, XRel does not need any extension of relational databases for storing XML documents. For processing XML queries, we present an algorithm for translating a core subset of XPath expressions into SQL queries. Finally, we demonstrate the effectiveness of this approach through several experiments using actual XML documents.",
"title": ""
},
{
"docid": "effea9e0ceda33fa27bf7da904eb2ed1",
"text": "Switched biasing is proposed as a technique for reducing the 1/f noise in MOSFET's. Conventional techniques, such as chopping or correlated double sampling, reduce the effect of 1/f noise in electronic circuits, whereas the switched biasing technique reduces the 1/f noise itself. Whereas noise reduction techniques generally lead to more power consumption, switched biasing can reduce the power consumption. It exploits an intriguing physical effect: cycling a MOS transistor from strong inversion to accumulation reduces its intrinsic 1/f noise. As the 1/f noise is reduced at its physical roots, high frequency circuits, in which 1/f noise is being upconverted, can also benefit. This is demonstrated by applying switched biasing in a 0.8 /spl mu/m CMOS sawtooth oscillator. By periodically switching off the bias currents, during time intervals that they are not contributing to the circuit operation, a reduction of the 1/f noise induced phase noise by more than 8 dB is achieved, while the power consumption is also reduced by 30%.",
"title": ""
},
{
"docid": "0317b7b1ca5fb8bcd20cac04466c820e",
"text": "Several clinical studies on major depressive disorder (MDD) have shown that blood brain-derived neurotrophic factor (BDNF) - a factor used to index neuroplasticity - is associated with depression response; however, the results are mixed. The purpose of our study was to evaluate whether BDNF levels are correlated with improvement of depression. We performed a systematic review and meta-analysis of the literature, searching Medline, Cochrane Central, SciELO databases and reference lists from retrieved articles for clinical studies comparing mean BDNF blood levels in depressed patients pre- and post-antidepressant treatments or comparing depressed patients with healthy controls. Two reviewers independently searched for eligible studies and extracted outcome data using a structured form previously elaborated. Twenty articles, including 1504 subjects, met our inclusion criteria. The results showed that BDNF levels increased significantly after antidepressant treatment (effect size 0.62, 95% CI 0.36-0.88, random effects model). In addition, there was a significant correlation between changes in BDNF level and depression scores changes (p=0.02). Moreover, the results were robust according to the sensitivity analysis and Begg's funnel plot results did not suggest publication bias. Finally, there was a difference between pre-treatment patients and healthy controls (effect size 0.91, 95% CI 0.70-1.11) and a small but significant difference between treated patients and healthy controls (effect size 0.34, 95% CI 0.02-0.66). Our results show that BDNF levels are associated with clinical changes in depression; supporting the notion that depression improvement is associated with neuroplastic changes.",
"title": ""
},
{
"docid": "e9b78d6f0fd98d5ee27bc08864cdb6a1",
"text": "Mathematical models play a pivotal role in understanding and designing advanced low-power wireless systems. However, the distributed and uncoordinated operation of traditional multi-hop low-power wireless protocols greatly complicates their accurate modeling. This is mainly because these protocols build and maintain substantial network state to cope with the dynamics of low-power wireless links. Recent protocols depart from this design by leveraging synchronous transmissions (ST), whereby multiple nodes simultaneously transmit towards the same receiver, as opposed to pair wise link-based transmissions (LT). ST improve the one-hop packet reliability to an extent that efficient multi-hop protocols with little network state are feasible. This paper studies whether ST also enable simple yet accurate modeling of these protocols. Our contribution to this end is two-fold. First, we show, through experiments on a 139-node test bed, that characterizing packet receptions and losses as a sequence of independent and identically distributed (i.i.d.) Bernoulli trials-a common assumption in protocol modeling but often illegitimate for LT-is largely valid for ST. We then show how this finding simplifies the modeling of a recent ST-based protocol, by deriving (i) sufficient conditions for probabilistic guarantees on the end-to-end packet reliability, and (ii) a Markovian model to estimate the long-term energy consumption. Validation using test bed experiments confirms that our simple models are also highly accurate, for example, the model error in energy against real measurements is 0.25%, a figure never reported before in the related literature.",
"title": ""
},
{
"docid": "30a17bdce5eb936aad1ddf56c285e808",
"text": "Currently, 4G mobile communication systems are supported by the 3GPP standard. In view of the significant increase in mobile data traffic, it is necessary to characterize it to improve the performance of current wireless networks. Indeed, video transmission and video streaming are fundamental assets for the upcoming smart cities and urban environments. Due to the high costs of deploying a real LTE system, emulation systems that consider real operating conditions emerge as a successful alternative. On the other hand, many studies with LTE simulations and emulations do not present information of basic adjustment parameters like the propagation model, nor of validation of the results with real conditions. This paper shows the validation with an ANOVA statistical analysis of an LTE emulation system developed in NS-3 for the live video streaming service. For the validation, different QoS parameters and real conditions have been used. Also, two protocols, namely RTMP and RTSP, have been tested. It is demonstrated that the emulation scenario is appropriate to characterize the traffic that will later allow to carry out a proper performance analysis of the service and technology under study.",
"title": ""
},
{
"docid": "ac6e52c2681565af02af7ee44bd669c7",
"text": "A novel low-temperature polycrystalline-silicon thin-film-transistor pixel circuit for 3D active-matrix organic light-emitting diode (AMOLED) displays is presented in this work. The proposed pixel circuit employs high frame rate (240 Hz) emission driving scheme and only needs 3.5 μs for input data period. Thus, 3D AMOLED displays can be realized under high speed operations. The simulation results demonstrate excellent stability in the proposed pixel circuit. The relative current error rate is only 0.967% under the threshold voltage deviation ( ΔVTH_DTFT = ± 0.33 V) of driving TFT. With an OLED threshold voltage detecting architecture, the OLED current can be increased with the increased OLED threshold voltage to compensate for the OLED luminance degradation. The proposed pixel circuit can therefore effectively compensate for the DTFT threshold voltage shift and OLED electric degradation at the same time.",
"title": ""
},
{
"docid": "30e80cceb7e63f89c6ab0cd20988bedb",
"text": "This work is focused on the development of a new management system for building and home automation that provides a fully real time monitor of household appliances and home environmental parameters. The developed system consists of a smart sensing unit, wireless sensors and actuators and a Web-based interface for remote and mobile applications. The main advantages of the proposed solution rely on the reliability of the developed algorithmics, on modularity and open-system characteristics, on low power consumption and system cost efficiency.",
"title": ""
},
{
"docid": "89e09d83de6b6f1e1c0db9b01c4afbee",
"text": "Speakers are often disfluent, for example, saying \"theee uh candle\" instead of \"the candle.\" Production data show that disfluencies occur more often during references to things that are discourse-new, rather than given. An eyetracking experiment shows that this correlation between disfluency and discourse status affects speech comprehensions. Subjects viewed scenes containing four objects, including two cohort competitors (e.g., camel, candle), and followed spoken instructions to move the objects. The first instruction established one cohort as discourse-given; the other was discourse-new. The second instruction was either fluent or disfluent, and referred to either the given or new cohort. Fluent instructions led to more initial fixations on the given cohort object (replicating Dahan et al., 2002). By contrast, disfluent instructions resulted in more fixations on the new cohort. This shows that discourse-new information can be accessible under some circumstances. More generally, it suggests that disfluency affects core language comprehension processes.",
"title": ""
},
{
"docid": "cd8cad6445b081e020d90eb488838833",
"text": "Heavy metal pollution has become one of the most serious environmental problems today. The treatment of heavy metals is of special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for heavy metal removal from wastewater have been extensively studied. This paper reviews the current methods that have been used to treat heavy metal wastewater and evaluates these techniques. These technologies include chemical precipitation, ion-exchange, adsorption, membrane filtration, coagulation-flocculation, flotation and electrochemical methods. About 185 published studies (1988-2010) are reviewed in this paper. It is evident from the literature survey articles that ion-exchange, adsorption and membrane filtration are the most frequently studied for the treatment of heavy metal wastewater.",
"title": ""
},
{
"docid": "6d07571fa4a7027a260bd6586d59e2bd",
"text": "As there is a need for innovative and new medical technologies in the healthcare, we identified Thalmic's “MYO Armband”, which is used for gaming systems and controlling applications in mobiles and computers. We can exploit this development in the field of medicine and healthcare to improve public health care system. So, we spotted “MYO diagnostics”, a computer-based application developed by Thalmic labs to understand Electromyography (EMG) lines (graphs), bits of vector data, and electrical signals of our complicated biology inside our arm. The human gestures will allow to gather huge amount of data and series of EMG lines which can be analysed to detect medical abnormalities and hand movements. This application has powerful algorithms which are translated into commands to recognise human hand gestures. The effect of doctors experience on user satisfaction metrics in using MYO armband can be measured in terms of effectiveness, efficiency and satisfaction which are based on the metrics-task completion, error counts, task times and satisfaction scores. In this paper, we considered only satisfaction metrics using a widely used System Usability Scale (SUS) questionnaire model to study the usability on the twenty-four medical students of the Brighton and Sussex Medical School. This helps in providing guidelines about the use of MYO armband for physiotherapy analysis by the doctors and patients. Another questionnaire with a focus on ergonomic (human factors) issues related to the use of the device such as social acceptability, ease of use and ease of learning, comfort and stress, attempted to discover characteristics of hand gestures using MYO. The results of this study can be used in a way to support the development of interactive physiotherapy analysis by individuals using MYO and hand gesture applications at their home for self-examination. Also, the relationship and correlation between the signals received will lead to a better understanding of the whole myocardium system and assist doctors in early diagnosis.",
"title": ""
},
{
"docid": "55741bd4bd57c2cb1a82f5759efd29b5",
"text": "Polarity classification of opinionated sentences with both positive and negative sentiments1 is a key challenge in sentiment analysis. This paper presents a novel unsupervised method for discovering intra-sentence level discourse relations for eliminating polarity ambiguities. Firstly, a discourse scheme with discourse constraints on polarity was defined empirically based on Rhetorical Structure Theory (RST). Then, a small set of cuephrase-based patterns were utilized to collect a large number of discourse instances which were later converted to semantic sequential representations (SSRs). Finally, an unsupervised method was adopted to generate, weigh and filter new SSRs without cue phrases for recognizing discourse relations. Experimental results showed that the proposed methods not only effectively recognized the defined discourse relations but also achieved significant improvement by integrating discourse information in sentence-level polarity classification.",
"title": ""
}
] |
scidocsrr
|
54001d3b43bb1a5ee4990a1dee417a02
|
Poisson image editing
|
[
{
"docid": "b29947243b1ad21b0529a6dd8ef3c529",
"text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.",
"title": ""
}
] |
[
{
"docid": "7512d936d3d170774ad34bac9b8adef3",
"text": "Recently, the concept of Internet of Things (IoT) is attracting much attention due to the huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes which usually have scare resources, and therefore cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called as an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks such as huge network latency as the volume of data which is being processed within the system increases. To alleviate this issue, the concept of fog computing is introduced, in which foglike intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the backend cloud infrastructure could be significantly reduced in the fog computing supported IoT cloud, which we will refer as IoT fog. Consequently, several valuable services, which were difficult to be delivered by the traditional IoT cloud, can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures, which might be useful to secure IoT fog. Then, we explore potential threats to IoT fog.",
"title": ""
},
{
"docid": "60db64d440feb7ff3290124c8409d33a",
"text": "The paper is part of a series of background papers which seeks to identify and analyze key constraints in higher education, skills development, and technology absorption in accelerating labor absorption and shared growth in South Africa. The background papers form part of the ‘Closing the Skills and Technology Gaps in South Africa’ project which was financed by the Australian Agency for International Development.",
"title": ""
},
{
"docid": "b19276f4f8f46ee5008166ad02b8e519",
"text": "Generative models with an encoding component such as autoencoders currently receive great interest. However, training of autoencoders is typically complicated by the need to train a separate encoder and decoder model that have to be enforced to be reciprocal to each other. To overcome this problem, by-design reversible neural networks (RevNets) had been previously used as generative models either directly optimizing the likelihood of the data under the model or using an adversarial approach on the generated data. Here, we instead investigate their performance using an adversary on the latent space in the adversarial autoencoder framework. We investigate the generative performance of RevNets on the CelebA dataset, showing that generative RevNets can generate coherent faces with similar quality as Variational Autoencoders. This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperformed relative to recent advanced generative models using an autoencoder component on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets. In addition to the experiments on CelebA, we show a proofof-principle experiment on the MNIST dataset suggesting that adversary-free trained RevNets can discover meaningful latent dimensions without pre-specifying the number of dimensions of the latent sampling distribution. In summary, this study shows that RevNets can be employed in different generative training settings. Source code for this study is at https://github.com/robintibor/ generative-reversible Translational Neurotechnology Lab, Medical Center University of Freiburg, Germany Machine Learning Lab, University of Freiburg, Germany. Correspondence to: Robin Tibor Schirrmeister <robin.schirrmeister@uniklinik-freiburg.de>. Presented at the ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "391e5f6168e331a26b0b0133f9648603",
"text": "In this paper the development of a 1W DC/DC converter is presented. It is based on a flyback converter for simple applications. The transformer is designed in a form as a spiral coil on a PCB. The special feature of the converter is that the flyback converter is laid out without a core. The area (diameter) of a spiral coil should be bigger as a coil with a core because the air is not a good energy store. The advantage of a coreless transformer is the PCB-Design. The paper describes the theory of the flyback converter and shows a way of the implementation of the coreless planar transformer. An analysis shows some results by different geometry and frequency of the planar transformer.",
"title": ""
},
{
"docid": "c5958b1ef21663b89e3823e9c33dc316",
"text": "The so-called “phishing” attacks are one of the important threats to individuals and corporations in today’s Internet. Combatting phishing is thus a top-priority, and has been the focus of much work, both on the academic and on the industry sides. In this paper, we look at this problem from a new angle. We have monitored a total of 19,066 phishing attacks over a period of ten months and found that over 90% of these attacks were actually replicas or variations of other attacks in the database. This provides several opportunities and insights for the fight against phishing: first, quickly and efficiently detecting replicas is a very effective prevention tool. We detail one such tool in this paper. Second, the widely held belief that phishing attacks are dealt with promptly is but an illusion. We have recorded numerous attacks that stay active throughout our observation period. This shows that the current prevention techniques are ineffective and need to be overhauled. We provide some suggestions in this direction. Third, our observation give a new perspective into the modus operandi of attackers. In particular, some of our observations suggest that a small group of attackers could be behind a large part of the current attacks. Taking down that group could potentially have a large impact on the phishing attacks observed today.",
"title": ""
},
{
"docid": "dbb7520f2f88005b70e0793c74b7b296",
"text": "Spoken language understanding and dialog management have emerged as key technologies in interacting with personal digital assistants (PDAs). The coverage, complexity, and the scale of PDAs are much larger than previous conversational understanding systems. As such, new problems arise. In this paper, we provide an overview of the language understanding and dialog management capabilities of PDAs, focusing particularly on Cortana, Microsoft's PDA. We explain the system architecture for language understanding and dialog management for our PDA, indicate how it differs with prior state-of-the-art systems, and describe key components. We also report a set of experiments detailing system performance on a variety of scenarios and tasks. We describe how the quality of user experiences are measured end-to-end and also discuss open issues.",
"title": ""
},
{
"docid": "f63d6cd35ac9ea46ee2a162fe8f68efa",
"text": "In the fashion industry, order scheduling focuses on the assignment of production orders to appropriate product ion lines. In reality, before a new order can be put into production, a series of activities known as pre-production events need t o be completed. In addition, in real production process, owing to various uncertainties, the daily production quantity of each order is not always as expected. In this research, by conside ring the pre-production events and the uncertainties in the dail y production quantity, robust order scheduling problems in the fashion industry are investigated with the aid of a multi-objective evolutionary algorithm (MOEA) called nondominated sorting adaptive differential evolution (NSJADE). The experimental results illustrate that it is of paramount importance to consider pre-production events in order scheduling problems in the f ashion industry. We also unveil that the existence of the uncertain ties in the daily production quantity heavily affects the order scheduling.",
"title": ""
},
{
"docid": "ba3522be00805402629b4fb4a2c21cc4",
"text": "Successful electronic government requires the successful implementation of technology. This book lays out a framework for understanding a system of decision processes that have been shown to be associated with the successful use of technology. Peter Weill and Jeanne Ross are based at the Center for Information Systems Research at MIT’s Sloan School of Management, which has been doing research on the management of information technology since 1974. Understanding how to make decisions about information technology has been a primary focus of the Center for decades. Weill and Ross’ book is based on two primary studies and a number of related projects. The more recent study is a survey of 256 organizations from the Americas, Europe, and Asia Pacific that was led by Peter Weill between 2001 and 2003. This work also included 32 case studies. The second study is a set of 40 case studies developed by Jeanne Ross between 1999 and 2003 that focused on the relationship between information technology (IT) architecture and business strategy. This work identified governance issues associated with IT and organizational change efforts. Three other projects undertaken by Weill, Ross, and others between 1998 and 2001 also contributed to the material described in the book. Most of this work is available through the CISR Web site, http://mitsloan.mit.edu/cisr/rmain.php. Taken together, these studies represent a substantial body of work on which to base the development of a frameBOOK REVIEW",
"title": ""
},
{
"docid": "941cd6b47980ff8539b7124a48f160e5",
"text": "Question Answering for complex questions is often modelled as a graph construction or traversal task, where a solver must build or traverse a graph of facts that answer and explain a given question. This “multi-hop” inference has been shown to be extremely challenging, with few models able to aggregate more than two facts before being overwhelmed by “semantic drift”, or the tendency for long chains of facts to quickly drift off topic. This is a major barrier to current inference models, as even elementary science questions require an average of 4 to 6 facts to answer and explain. In this work we empirically characterize the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three freetext corpora (including study guides and Simple Wikipedia). We demonstrate semantic drift tends to be high and aggregation quality low, at between 0.04% and 3%, and highlight scenarios that maximize the likelihood of meaningfully combining information.",
"title": ""
},
{
"docid": "cebb70761a891fd1bce7402c10e7266c",
"text": "Abstract: A new approach for mobility, providing an alternative to the private passenger car, by offering the same flexibility but with much less nuisances, is emerging, based on fully automated electric vehicles. A fleet of such vehicles might be an important element in a novel individual, door-to-door, transportation system to the city of tomorrow. For fully automated operation, trajectory planning methods that produce smooth trajectories, with low associated accelerations and jerk, for providing passenger ́s comfort, are required. This paper addresses this problem proposing an approach that consists of introducing a velocity planning stage to generate adequate time sequences for usage in the interpolating curve planners. Moreover, the generated speed profile can be merged into the trajectory for usage in trajectory-tracking tasks like it is described in this paper, or it can be used separately (from the generated 2D curve) for usage in pathfollowing tasks. Three trajectory planning methods, aided by the speed profile planning, are analysed from the point of view of passengers' comfort, implementation easiness, and trajectory tracking.",
"title": ""
},
{
"docid": "5291984b7e42ff858a567b56cb6f7949",
"text": "Editor: Ulrike von Luxburg Abstract We study hierarchical clustering schemes under an axiomatic view. We show that within this framework, one can prove a theorem analogous to one of J. Kleinberg (Kleinberg, 2002), in which one obtains an existence and uniqueness theorem instead of a non-existence result. We explore further properties of this unique scheme: stability and convergence are established. We represent dendrograms as ultrametric spaces and use tools from metric geometry, namely the Gromov-Hausdor↵ distance, to quantify the degree to which perturbations in the input metric space a↵ect the result of hierarchical methods.",
"title": ""
},
{
"docid": "f8527ea496666ef875805d376fbd2d5d",
"text": "The rapid development of computer and robotic technologies in the last decade is giving hope to perform earlier and more accurate diagnoses of the Autism Spectrum Disorder (ASD), and more effective, consistent, and cost-conscious treatment. Besides the reduced cost, the main benefit of using technology to facilitate treatment is that stimuli produced during each session of the treatment can be controlled, which not only guarantees consistency across different sessions, but also makes it possible to focus on a single phenomenon, which is difficult even for a trained professional to perform, and deliver the stimuli according to the treatment plan. In this article, we provide a comprehensive review of research on recent technology-facilitated diagnosis and treat of children and adults with ASD. Different from existing reviews on this topic, which predominantly concern clinical issues, we focus on the engineering perspective of autism studies. All technology facilitated systems used for autism studies can be modeled as human machine interactive systems where one or more participants would constitute as the human component, and a computer-based or a robotic-based system would be the machine component. Based on this model, we organize our review with the following questions: (1) What are presented to the participants in the studies and how are the content and delivery methods enabled by technologies? (2) How are the reactions/inputs collected from the participants in response to the stimuli in the studies? (3) Are the experimental procedure and programs presented to participants dynamically adjustable based on the responses from the participants, and if so, how? and (4) How are the programs assessed?",
"title": ""
},
{
"docid": "cd8cad6445b081e020d90eb488838833",
"text": "Heavy metal pollution has become one of the most serious environmental problems today. The treatment of heavy metals is of special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for heavy metal removal from wastewater have been extensively studied. This paper reviews the current methods that have been used to treat heavy metal wastewater and evaluates these techniques. These technologies include chemical precipitation, ion-exchange, adsorption, membrane filtration, coagulation-flocculation, flotation and electrochemical methods. About 185 published studies (1988-2010) are reviewed in this paper. It is evident from the literature survey articles that ion-exchange, adsorption and membrane filtration are the most frequently studied for the treatment of heavy metal wastewater.",
"title": ""
},
{
"docid": "937de8ba80bd92084f9c2886a28874d1",
"text": "Android security has been a hot spot recently in both academic research and public concerns due to numerous instances of security attacks and privacy leakage on Android platform. Android security has been built upon a permission based mechanism which restricts accesses of third-party Android applications to critical resources on an Android device. Such permission based mechanism is widely criticized for its coarse-grained control of application permissions and difficult management of permissions by developers, marketers, and end-users. In this paper, we investigate the arising issues in Android security, including coarse granularity of permissions, incompetent permission administration, insufficient permission documentation, over-claim of permissions, permission escalation attack, and TOCTOU (Time of Check to Time of Use) attack. We illustrate the relationships among these issues, and investigate the existing countermeasures to address these issues. In particular, we provide a systematic review on the development of these countermeasures, and compare them according to their technical features. Finally, we propose several methods to further mitigate the risk in Android security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d98f60a2a0453954543da840076e388a",
"text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster with short training times than standard back-propagation, and perform similar as standard back-propagation at convergence.",
"title": ""
},
{
"docid": "e1fe3c9b60f316c8658a18796245c243",
"text": "The ransomware nightmare is taking over the internet impacting common users, small businesses and large ones. The interest and investment which are pushed into this market each month, tells us a few things about the evolution of both technical and social engineering and what to expect in the short-coming future from them. In this paper we analyze how ransomware programs developed in the last few years and how they were released in certain market segments throughout the deep web via RaaS, exploits or SPAM, while learning from their own mistakes to bring profit to the next level. We will also try to highlight some mistakes that were made, which allowed recovering the encrypted data, along with the ransomware authors preference for specific encryption types, how they got to distribute, the silent agreement between ransomwares, coin-miners and bot-nets and some edge cases of encryption, which may prove to be exploitable in the short-coming future.",
"title": ""
},
{
"docid": "d396f95b96ba06154effb6df6991a092",
"text": "Wireless networks have become the main form of Internet access. Statistics show that the global mobile Internet penetration should exceed 70% until 2019. Wi-Fi is an important player in this change. Founded on IEEE 802.11, this technology has a crucial impact in how we share broadband access both in domestic and corporate networks. However, recent works have indicated performance issues in Wi-Fi networks, mainly when they have been deployed without planning and under high user density. Hence, different collision avoidance techniques and Medium Access Control protocols have been designed in order to improve Wi-Fi performance. Analyzing the collision problem, this work strengthens the claims found in the literature about the low Wi-Fi performance under dense scenarios. Then, in particular, this article overviews the MAC protocols used in the IEEE 802.11 standard and discusses solutions to mitigate collisions. Finally, it contributes presenting future trends in MAC protocols. This assists in foreseeing expected improvements for the next generation of Wi-Fi devices.",
"title": ""
},
{
"docid": "f66f7c514097b468c5f196e60ab05ca6",
"text": "INTRODUCTION\nThis study aimed to assess psychological distress (PD) as scored by the Distress Thermometer (DT) in adult primary brain tumor patients and caregivers (CGs) in a clinic setting and ascertain if any high-risk subgroups for PD exist.\n\n\nMATERIAL AND METHODS\nFrom May 2012 to August 2013, n = 96 patients and n = 32 CG underwent DT screening at diagnosis, and a differing cohort of n = 12 patients and n = 14 CGs at first recurrence. Groups were described by diagnosis (high grade, low grade, and benign) and English versus non English speaking. Those with DT score ≥4 met caseness criteria for referral to psycho-oncology services. One-way ANOVA tests were conducted to test for between-group differences where appropriate.\n\n\nRESULTS\nAt diagnosis and first recurrence, 37.5 and 75.0% (respectively) of patients had DT scores above the cutoff for distress. At diagnosis, 78.1% of CGs met caseness criteria for distress. All CGs at recurrence met distress criterion. Patients with high-grade glioma had significantly higher scores than those with a benign tumor. For patients at diagnosis, non English speaking participants did not report significantly higher DT scores than English speaking participants.\n\n\nDISCUSSION\nPsychological distress is particularly elevated in CGs and in patients with high-grade glioma at diagnosis. Effective PD screening, triage, and referral by skilled care coordinators are vital to enable timely needs assessment, psychological support, and effective intervention.",
"title": ""
},
{
"docid": "235fc12dc2f741dacede5f501b028cd3",
"text": "Self-adaptive software is capable of evaluating and changing its own behavior, whenever the evaluation shows that the software is not accomplishing what it was intended to do, or when better functionality or performance may be possible. The topic of system adaptivity has been widely studied since the mid-60s and, over the past decade, several application areas and technologies relating to self-adaptivity have assumed greater importance. In all these initiatives, software has become the common element that introduces self-adaptability. Thus, the investigation of systematic software engineering approaches is necessary, in order to develop self-adaptive systems that may ideally be applied across multiple domains. The main goal of this study is to review recent progress on self-adaptivity from the standpoint of computer sciences and cybernetics, based on the analysis of state-of-the-art approaches reported in the literature. This review provides an over-arching, integrated view of computer science and software engineering foundations. Moreover, various methods and techniques currently applied in the design of self-adaptive systems are analyzed, as well as some European research initiatives and projects. Finally, the main bottlenecks for the effective application of self-adaptive technology, as well as a set of key research issues on this topic, are precisely identified, in order to overcome current constraints on the effective application of self-adaptivity in its emerging areas of application. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7895810c92a80b6d5fd8b902241d66c9",
"text": "This paper discusses a high-voltage pulse generator for producing corona plasma. The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.",
"title": ""
}
] |
scidocsrr
|
7f1bd8fa790ca9f0d89217c03fbb7720
|
Image segmentation based on maximum-likelihood estimation and optimum entropy-distribution (MLE-OED)
|
[
{
"docid": "f8071cfa96286882defc85c46b7ab866",
"text": "A novel method for finding active contours, or snakes as developed by Xu and Prince [1] is presented in this paper. The approach uses a regularization based technique and calculus of variations to find what the authors call a Gradient Vector Field or GVF in binary-values or grayscale images. The GVF is in turn applied to ’pull’ the snake towards the required feature. The approach presented here differs from other snake algorithms in its ability to extend into object concavities and its robust initialization technique. Although their algorithm works better than existing active contour algorithms, it suffers from computational complexity and associated costs in execution, resulting in slow execution time.",
"title": ""
}
] |
[
{
"docid": "dc7fb9e9ef95fa438b242e24517b6d36",
"text": "The representation of candidate solutions and the variation operators are fundamental design choices in an evolutionary algorithm (EA). This paper proposes a novel representation technique and suitable variation operators for the degree-constrained minimum spanning tree problem. For a weighted, undirected graphG(V, E), this problem seeks to identify the shortest spanning tree whose node degrees do not exceed an upper bound d ≥ 2. Within the EA, a candidate spanning tree is simply represented by its set of edges. Special initialization, crossover, and mutation operators are used to generate new, always feasible candidate solutions. In contrast to previous spanning tree representations, the proposed approach provides substantially higher locality and is nevertheless computationally efficient; an offspring is always created in O(|V |) time. In addition, it is shown how problemdependent heuristics can be effectively incorporated into the initialization, crossover, and mutation operators without increasing the time-complexity. Empirical results are presented for hard problem instances with up to 500 vertices. Usually, the new approach identifies solutions superior to those of several other optimization methods within few seconds. The basic ideas of this EA are also applicable to other network optimization tasks.",
"title": ""
},
{
"docid": "c8e5257c2ed0023dc10786a3071c6e6a",
"text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"title": ""
},
{
"docid": "880aa3de3b839739927cbd82b7abcf8a",
"text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers which an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community- sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.",
"title": ""
},
{
"docid": "5ba3baabc84d02f0039748a4626ace36",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "d3b6fcc353382c947cfb0b4a73eda0ef",
"text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"title": ""
},
{
"docid": "bc8780078bef1e7c602e16dcf3ccf0bc",
"text": "In this paper, we deal with the problem of authentication and tamper-proofing of text documents that can be distributed in electronic or printed forms. We advocate the combination of robust text hashing and text data-hiding technologies as an efficient solution to this problem. First, we consider the problem of text data-hiding in the scope of the Gel'fand-Pinsker data-hiding framework. For illustration, two modern text data-hiding methods, namely color index modulation (CIM) and location index modulation (LIM), are explained. Second, we study two approaches to robust text hashing that are well suited for the considered problem. In particular, both approaches are compatible with CIM and LIM. The first approach makes use of optical character recognition (OCR) and a classical cryptographic message authentication code (MAC). The second approach is new and can be used in some scenarios where OCR does not produce consistent results. The experimental work compares both approaches and shows their robustness against typical intentional/unintentional document distortions including electronic format conversion, printing, scanning, [...] VILLAN SEBASTIAN, Renato Fisher, et al. Tamper-proofing of Electronic and Printed Text Documents via Robust Hashing and Data-Hiding. In: Proceedings of SPIE-IS&T Electronic Imaging 2007, Security, Steganography, and Watermarking of Multimedia",
"title": ""
},
{
"docid": "ef15ffc5609653488c68364d2ba77149",
"text": "BACKGROUND\nBeneficial effects of probiotics have never been analyzed in an animal shelter.\n\n\nHYPOTHESIS\nDogs and cats housed in an animal shelter and administered a probiotic are less likely to have diarrhea of ≥2 days duration than untreated controls.\n\n\nANIMALS\nTwo hundred and seventeen cats and 182 dogs.\n\n\nMETHODS\nDouble blinded and placebo controlled. Shelter dogs and cats were housed in 2 separate rooms for each species. For 4 weeks, animals in 1 room for each species was fed Enterococcus faecium SF68 while animals in the other room were fed a placebo. After a 1-week washout period, the treatments by room were switched and the study continued an additional 4 weeks. A standardized fecal score system was applied to feces from each animal every day by a blinded individual. Feces of animals with and without diarrhea were evaluated for enteric parasites. Data were analyzed by a generalized linear mixed model using a binomial distribution with treatment being a fixed effect and the room being a random effect.\n\n\nRESULTS\nThe percentage of cats with diarrhea ≥2 days was significantly lower (P = .0297) in the probiotic group (7.4%) when compared with the placebo group (20.7%). Statistical differences between groups of dogs were not detected but diarrhea was uncommon in both groups of dogs during the study.\n\n\nCONCLUSION AND CLINICAL IMPORTANCE\nCats fed SF68 had fewer episodes of diarrhea of ≥2 days when compared with controls suggests the probiotic may have beneficial effects on the gastrointestinal tract.",
"title": ""
},
{
"docid": "a5b71d7162abd4408e2ec821302c0431",
"text": "The Army Digital Array Radar (DAR) project's goal is to demonstrate how wide-bandgap semiconductor technology, highly-integrated transceivers, and the ever-increasing capabilities of commercial digital components can be leveraged to provide new capabilities and enhanced performance in future low-cost phased array systems. A 16-element, S-band subarray has been developed with panel-integrated, plastic-packaged gallium-nitride (GaN) amplifiers, multi-channel transceiver ICs, and digitization at the element level. In addition to full digital beamforming on transmit and receive, the DAR subarray has demonstrated efficient RF power generation exceeding 25 Watts per element, in-situ, element-level calibration monitoring and self-correction capabilities, simultaneous transmit and receive operation through subarray partitioning for an indoor target tracker, and more. An overview is given of these results and capabilities.",
"title": ""
},
{
"docid": "6655a8886137b73e6ed81296871c34f9",
"text": "This research significantly achieved the construction of a teaching evaluation sentiment lexicon and an automated sentiment orientation polarity definition in teaching evaluation. The Teaching Senti-lexicon will compute the weights of terms and phrases obtained from student opinions, which are stored in teaching evaluation suggestions in the form of open-ended questions. This Teaching Senti-lexicon consists of three main attributes, including: teaching corpus, category and sentiment weight score. The sentiment orientation polarity was computed with its mean function being sentiment class definitions. A number of 175 instances were randomised using teaching feedback responses which were posted by students studying at Loei Raja hat University. The contributions of this paper propose an effective teaching sentiment analysis method, especially for teaching evaluation. In this paper, the experimented model employed SVM, ID3 and Naïve Bayes algorithms, which were implemented in order to analyse sentiment classifications with a 97% highest accuracy of SVM. This model is also applied to improve upon their teaching as well.",
"title": ""
},
{
"docid": "e579e6761bc7fa50e76d0141fe848892",
"text": "Vehicular Ad-hoc Network (VANET) is an infrastructure less network. It provides enhancement in safety related techniques and comfort while driving. It enables vehicles to share information regarding safety and traffic analysis. The scope of VANET application has increased with the recent advances in technology and development of smart cities across the world. VANET provide a self aware system that has major impact in enhancement of traffic services and in reducing road accidents. Information shared in this system is time sensitive and requires robust and quick forming network connections. VANET, being a wireless ad hoc network, serves this purpose completely but is prone to security attacks. Highly dynamic connections, sensitive information sharing and time sensitivity of this network, make it an eye-catching field for attackers. This paper represents a literature survey on VANET with primary concern of the security issues and challenges with it. Features of VANET, architecture, security requisites, attacker type and possible attacks in VANET are considered in this survey paper.",
"title": ""
},
{
"docid": "fef105b33a85f76f24c468c58a7534a0",
"text": "An aging population in the United States presents important challenges for patients and physicians. The presence of inflammation can contribute to an accelerated aging process, the increasing presence of comorbidities, oxidative stress, and an increased prevalence of chronic pain. As patient-centered care is embracing a multimodal, integrative approach to the management of disease, patients and physicians are increasingly looking to the potential contribution of natural products. Camu camu, a well-researched and innovative natural product, has the potential to contribute, possibly substantially, to this management paradigm. The key issue is to raise camu camu's visibility through increased emphasis on its robust evidentiary base and its various formulations, as well as making consumers, patients, and physicians more aware of its potential. A program to increase the visibility of camu camu can contribute substantially not only to the management of inflammatory conditions and its positive contribution to overall good health but also to its potential role in many disease states.",
"title": ""
},
{
"docid": "a999b1a89ea72a9e2d4c3b14609acec9",
"text": "During the past few years, evidence of mass independent fractionation (MIF) for mercury (Hg) isotopes have been reported in the Earth's surface reservoirs, mainly assumed to be formed during photochemical processes. However, the magnitude of Hg-MIF in interior pools of the crust is largely unknown. Here, we reported significant variation in Hg-MIF signature (Δ(199)Hg: -0.24 ~ + 0.18‰) in sphalerites collected from 102 zinc (Zn) deposits in China, indicating that Hg-MIF can be recorded into the Earth's crust during geological recycling of crustal material. Changing magnitudes of Hg-MIF signals were observed in Zn deposits with different formations, evidence that Hg isotopes (especially Hg-MIF) can be a useful tracer to identify sources (syngenetic and epigenetic) of Hg in mineral deposits. The average isotopic composition in studied sphalerites (δ(202)Hg average: -0.58‰; Δ(199)Hg average: +0.03‰) may be used to fingerprint Zn smelting activities, one of the largest global Hg emission sources.",
"title": ""
},
{
"docid": "3b97d25d0a0e07d4b4fccc64ff251cce",
"text": "Consider a centralized hierarchical cloud-based multimedia system (CMS) consisting of a resource manager, cluster heads, and server clusters, in which the resource manager assigns clients' requests for multimedia service tasks to server clusters according to the task characteristics, and then each cluster head distributes the assigned task to the servers within its server cluster. For such a complicated CMS, however, it is a research challenge to design an effective load balancing algorithm that spreads the multimedia service task load on servers with the minimal cost for transmitting multimedia data between server clusters and clients, while the maximal load limit of each server cluster is not violated. Unlike previous work, this paper takes into account a more practical dynamic multiservice scenario in which each server cluster only handles a specific type of multimedia task, and each client requests a different type of multimedia service at a different time. Such a scenario can be modelled as an integer linear programming problem, which is computationally intractable in general. As a consequence, this paper further solves the problem by an efficient genetic algorithm with an immigrant scheme, which has been shown to be suitable for dynamic problems. Simulation results demonstrate that the proposed genetic algorithm can efficiently cope with dynamic multiservice load balancing in CMS.",
"title": ""
},
{
"docid": "3f77b59dc39102eb18e31dbda0578ecb",
"text": "GaN high electron mobility transistors (HEMTs) are well suited for high-frequency operation due to their lower on resistance and device capacitance compared with traditional silicon devices. When grown on silicon carbide, GaN HEMTs can also achieve very high power density due to the enhanced power handling capabilities of the substrate. As a result, GaN-on-SiC HEMTs are increasingly popular in radio-frequency power amplifiers, and applications as switches in high-frequency power electronics are of high interest. This paper explores the use of GaN-on-SiC HEMTs in conventional pulse-width modulated switched-mode power converters targeting switching frequencies in the tens of megahertz range. Device sizing and efficiency limits of this technology are analyzed, and design principles and guidelines are given to exploit the capabilities of the devices. The results are presented for discrete-device and integrated implementations of a synchronous Buck converter, providing more than 10-W output power supplied from up to 40 V with efficiencies greater than 95% when operated at 10 MHz, and greater than 90% at switching frequencies up to 40 MHz. As a practical application of this technology, the converter is used to accurately track a 3-MHz bandwidth communication envelope signal with 92% efficiency.",
"title": ""
},
{
"docid": "4508f1adb03013497bc6d7c30d64fbcc",
"text": "Motivated by and grounded in observations of eye-gaze patterns in human-human dialogue, this study explores using eye-gaze patterns in managing human-computer dialogue. We developed an interactive system, iTourist, for city trip planning, which encapsulated knowledge of eye-gaze patterns gained from studies of human-human collaboration systems. User study results show that it was possible to sense users' interest based on eye-gaze patterns and manage computer information output accordingly. Study participants could successfully plan their trip with iTourist and positively rated their experience of using it. We demonstrate that eye-gaze could play an important role in managing future multimodal human-computer dialogues.",
"title": ""
},
{
"docid": "4239f9110973888c7eded81037c056b3",
"text": "The role of epistasis in the genetic architecture of quantitative traits is controversial, despite the biological plausibility that nonlinear molecular interactions underpin the genotype–phenotype map. This controversy arises because most genetic variation for quantitative traits is additive. However, additive variance is consistent with pervasive epistasis. In this Review, I discuss experimental designs to detect the contribution of epistasis to quantitative trait phenotypes in model organisms. These studies indicate that epistasis is common, and that additivity can be an emergent property of underlying genetic interaction networks. Epistasis causes hidden quantitative genetic variation in natural populations and could be responsible for the small additive effects, missing heritability and the lack of replication that are typically observed for human complex traits.",
"title": ""
},
{
"docid": "a3cb6d84445bea04c5da888d34928c94",
"text": "In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. Demo1 and code2 are provided.",
"title": ""
},
{
"docid": "623e62e756321d14bb552a1ef364e4a5",
"text": "With the wide deployment of smart card automated fare collection (SCAFC) systems, public transit agencies have been benefiting from huge volume of transit data, a kind of sequential data, collected every day. Yet, improper publishing and use of transit data could jeopardize passengers' privacy. In this paper, we present our solution to transit data publication under the rigorous differential privacy model for the Société de transport de Montréal (STM). We propose an efficient data-dependent yet differentially private transit data sanitization approach based on a hybrid-granularity prefix tree structure. Moreover, as a post-processing step, we make use of the inherent consistency constraints of a prefix tree to conduct constrained inferences, which lead to better utility. Our solution not only applies to general sequential data, but also can be seamlessly extended to trajectory data. To our best knowledge, this is the first paper to introduce a practical solution for publishing large volume of sequential data under differential privacy. We examine data utility in terms of two popular data analysis tasks conducted at the STM, namely count queries and frequent sequential pattern mining. Extensive experiments on real-life STM datasets confirm that our approach maintains high utility and is scalable to large datasets.",
"title": ""
},
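Note: the abstract above sanitizes sequential transit data with a differentially private prefix tree. The sketch below shows only the generic idea of descending a depth-limited prefix tree and adding Laplace noise to node counts before pruning; the even budget split, threshold value and data layout are assumptions and do not reproduce the authors' hybrid-granularity design or consistency post-processing.

```python
import random
from collections import defaultdict

def laplace_noise(scale):
    """Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_prefix_tree(sequences, epsilon, max_depth=3, threshold=5.0):
    """Build a depth-limited prefix tree whose node counts carry Laplace noise.

    sequences: list of trips, each a list of station ids.
    epsilon: total privacy budget, split evenly across levels (assumption).
    threshold: prune nodes whose noisy count falls below this value.
    """
    eps_per_level = epsilon / max_depth
    tree = {}

    def expand(prefix, remaining, depth):
        if depth == max_depth or not remaining:
            return
        groups = defaultdict(list)
        for seq in remaining:
            if len(seq) > depth:
                groups[seq[depth]].append(seq)
        for symbol, matched in groups.items():
            noisy = len(matched) + laplace_noise(1.0 / eps_per_level)
            if noisy >= threshold:
                node = prefix + (symbol,)
                tree[node] = noisy
                expand(node, matched, depth + 1)

    expand((), sequences, 0)
    return tree

# Toy usage with made-up trips over station ids.
print(noisy_prefix_tree([[1, 2, 3]] * 40 + [[1, 4]] * 20 + [[5, 6]] * 3, epsilon=1.0))
```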
{
"docid": "e1770da0f9fcb24e534a8eeab77fc960",
"text": "The performance of any system cannot be determined without knowing the workload, that is, the set of requests presented to the system. Workload characterization is the process by which we produce models that are capable of describing and reproducing the behavior of a workload. Such models are imperative to any performance related studies such as capacity planning, workload balancing, performance prediction and system tuning. In this paper, we survey workload characterization techniques used for several types of computer systems. We identify significant issues and concerns encountered during the characterization process and propose an augmented methodology for workload characterization as a framework. We believe that the surveyed case studies, the described characterization techniques, and the proposed framework give a good introduction to the topic, assist in exploring the different options of characterization tools that can be adopted, and provide general guidelines for deriving a good workload model suitable as an input to performance studies.",
"title": ""
},
{
"docid": "20daad42c2587043562f3864f9e888c2",
"text": "In recent years, deep neural network approaches have naturally extended to the video domain, in their simplest case by aggregating per-frame classifications as a baseline for action recognition. A majority of the work in this area extends from the imaging domain, leading to visual-feature heavy approaches on temporal data. To address this issue we introduce “Let’s Dance”, a 1000 video dataset (and growing) comprised of 10 visually overlapping dance categories that require motion for their classification. We stress the important of human motion as a key distinguisher in our work given that, as we show in this work, visual information is not sufficient to classify motion-heavy categories. We compare our datasets’ performance using imaging techniques with UCF-101 and demonstrate this inherent difficulty. We present a comparison of numerous state-of-theart techniques on our dataset using three different representations (video, optical flow and multi-person pose data) in order to analyze these approaches. We discuss the motion parameterization of each of them and their value in learning to categorize online dance videos. Lastly, we release this dataset (and its three representations) for the research community to use.",
"title": ""
}
] |
scidocsrr
|
09451a5858b0da29dd6ea17e4119bffb
|
Recent advances in techniques for hyperspectral image processing
|
[
{
"docid": "5d247482bb06e837bf04c04582f4bfa2",
"text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.",
"title": ""
}
] |
[
{
"docid": "177d78352dab39befe562d17d79315b4",
"text": "Having access to relevant patient data is crucial for clinical decision making. The data is often documented in unstructured texts and collected in the electronic health record. In this paper, we evaluate an approach to visualize information extracted from clinical documents by means of tag cloud. Tag clouds will be generated using a bag of word approach and by exploiting part of speech tags. For a real word data set comprising radiological reports, pathological reports and surgical operation reports, tag clouds are generated and a questionnaire-based study is conducted as evaluation. Feedback from the physicians shows that the tag cloud visualization is an effective and rapid approach to represent relevant parts of unstructured patient data. To handle the different medical narratives, we have summarized several possible improvements according to the user feedback and evaluation results.",
"title": ""
},
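Note: the abstract above generates tag clouds from clinical documents with a bag-of-words approach. As a rough illustration only, term weights for such a cloud could be computed as below; the stop-word list, tokenizer and raw-frequency weighting are assumptions, not the evaluated system.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "and", "of", "in", "with", "a", "to", "was", "is", "no"}  # toy list

def tag_weights(report_text, top_k=20):
    """Return the top_k most frequent non-stop-word terms with their counts;
    a front end could render these as a tag cloud (font size ~ weight)."""
    tokens = re.findall(r"[a-z]+", report_text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return counts.most_common(top_k)

print(tag_weights("No acute infiltrate. Mild cardiomegaly, stable cardiomegaly."))
```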
{
"docid": "470093535d4128efa9839905ab2904a5",
"text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.",
"title": ""
},
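Note: the abstract above describes an MPPT controller derived from energy conservation and implemented on a DSP. The sketch below shows only the generic perturb-and-observe hill-climbing idea often used as an MPPT baseline; the callable names, step size and loop structure are assumptions, not the cited controller.

```python
def perturb_and_observe(read_voltage, read_current, set_duty,
                        duty=0.5, step=0.005, iterations=1000):
    """Hill-climbing MPPT: perturb the converter duty cycle and keep moving
    in the direction that increases the measured PV output power.

    read_voltage / read_current: callables returning panel measurements.
    set_duty: callable applying a duty cycle to the dc/dc converter.
    """
    last_power = read_voltage() * read_current()
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        power = read_voltage() * read_current()
        if power < last_power:      # overshot the peak: reverse direction
            direction = -direction
        last_power = power
    return duty
```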
{
"docid": "381a11fe3d56d5850ec69e2e9427e03f",
"text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.",
"title": ""
},
{
"docid": "7f5bc34cd08a09014cff1b07c2cf72d0",
"text": "This paper presents the RF telecommunications system designed for the New Horizons mission, NASA’s planned mission to Pluto, with focus on new technologies developed to meet mission requirements. These technologies include an advanced digital receiver — a mission-enabler for its low DC power consumption at 2.3 W secondary power. The receiver is one-half of a card-based transceiver that is incorporated with other spacecraft functions into an integrated electronics module, providing further reductions in mass and power. Other developments include extending APL’s long and successful flight history in ultrastable oscillators (USOs) with an updated design for lower DC power. These USOs offer frequency stabilities to 1 part in 10, stabilities necessary to support New Horizons’ uplink radio science experiment. In antennas, the 2.1 meter high gain antenna makes use of shaped suband main reflectors to improve system performance and achieve a gain approaching 44 dBic. New Horizons would also be the first deep-space mission to fly a regenerative ranging system, offering up to a 30 dB performance improvement over sequential ranging, especially at long ranges. The paper will provide an overview of the current system design and development and performance details on the new technologies mentioned above. Other elements of the telecommunications system will also be discussed. Note: New Horizons is NASA’s planned mission to Pluto, and has not been approved for launch. All representations made in this paper are contingent on a decision by NASA to go forward with the preparation for and launch of the mission.",
"title": ""
},
{
"docid": "c21e39d4cf8d3346671ae518357c8edb",
"text": "The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is promising approach to constructing deep learning applications in the future.",
"title": ""
},
{
"docid": "3f0d37296258c68a20da61f34364405d",
"text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.",
"title": ""
},
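Note: the abstract above extracts desired motion from interaction forces with an adaptive RBF neural network. The snippet below is only the forward pass of a generic Gaussian RBF network, to show the kind of mapping involved; the adaptive weight-update law and controller of the cited work are not reproduced, and all shapes and values are assumptions.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Output of a Gaussian RBF network for an input vector x.

    centers: (k, d) basis centers, widths: (k,) spreads, weights: (k,) output weights.
    """
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return float(phi @ weights)

# Toy usage: map a 2-D force measurement to a scalar motion estimate.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
print(rbf_forward(np.array([0.5, 0.5]), centers, np.array([0.5, 0.5]), np.array([1.0, -1.0])))
```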
{
"docid": "76f60b9e5e894d8bd150a90f6db660a0",
"text": "There has been significant progress in recognition of outdoor scenes but indoor scene recognition is still an challenge. This is due to the high appearance fluctuation of indoor situations. With the recent developments in indoor and mobile robotics, identifying the indoor scenes has gained importance. Many approaches have been proposed to detect scenes using object detection and geotags. In contrast, the proposal of this paper uses the convolutional neural network which has gained importance with advancement in machine learning methodologies. Our method has higher efficiency than the existing models as we try to classify the environment as a whole rather than using object identification for the same. We test this approach on our dataset which consists of RGB and also depth images of common locations present in academic environments such as class rooms, labs etc. The proposed approach performs better than previous ones with accuracy up to 98%.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "b6bec5e17f8edae3ccd9df5617dce52e",
"text": "This technical report describes CHERI ISAv6, the sixth version of the Capability Hardware Enhanced RISC Instructions (CHERI) Instruction-Set Architecture (ISA)1 being developed by SRI International and the University of Cambridge. This design captures seven years of research, development, experimentation, refinement, formal analysis, and validation through hardware and software implementation. CHERI ISAv6 is a substantial enhancement to prior ISA versions: it introduces support for kernel-mode compartmentalization, jump-based rather than exception-based domain transition, architecture-abstracted and efficient tag restoration, and more efficient generated code. A new chapter addresses potential applications of the CHERI model to the RISC-V and x86-64 ISAs, previously described relative only to the 64-bit MIPS ISA. CHERI ISAv6 better explains our design rationale and research methodology. CHERI is a hybrid capability-system architecture that adds new capability-system primitives to a commodity 64-bit RISC ISA enabling software to efficiently implement fine-grained memory protection and scalable software compartmentalization. Design goals have included incremental adoptability within current ISAs and software stacks, low performance overhead for memory protection, significant performance improvements for software compartmentalization, formal grounding, and programmer-friendly underpinnings. Throughout, we have focused on providing strong and efficient architectural foundations for the principles of least privilege and intentional use in the execution of software at multiple levels of abstraction, preventing and mitigating vulnerabilities. The CHERI system architecture purposefully addresses known performance and robustness gaps in commodity ISAs that hinder the adoption of more secure programming models centered around the principle of least privilege. To this end, CHERI blends traditional paged virtual memory with an in-address-space capability model that includes capability registers, capability instructions, and tagged memory. CHERI builds on C-language fat-pointer literature: its capabilities describe fine-grained regions of memory and can be substituted for data or code pointers in generated code, protecting data and also improving control-flow robustness. Strong capability integrity and monotonicity properties allow the CHERI model to express a variety of protection properties, from enforcing valid C-language pointer provenance and bounds checking to implementing the isolation and controlled communication structures required for software compartmentalization. CHERI’s hybrid capability-system approach, inspired by the Capsicum security model, allows incremental adoption of capability-oriented design: software implementations that are more robust and resilient can be deployed where they are most needed, while leaving less critical software largely unmodified, but nevertheless suitably constrained to be incapable of having adverse effects. Potential deployment scenarios include low-level software Trusted Computing Bases (TCBs) such as separation kernels, hypervisors, and operating-system kernels, as well as userspace TCBs such as language runtimes and web browsers. 
Likewise, we see early-use scenarios (such as data compression, protocol parsing, and image processing) that relate to particularly high-risk software libraries, which are concentrations of both complex and historically vulnerability-prone code exposed to untrustworthy data sources, while leaving containing applications unchanged. (Note: we have attempted to avoid confusion among three rather different uses of the word 'architecture'. The ISA specifies the interface between hardware and software, rather than describing either the (micro-)architecture of a particular hardware prototype, or laying out the total-system hardware-software architecture.)",
"title": ""
},
{
"docid": "e2b173a7ca137f2ecc8dd952a004c5c5",
"text": "The clinical approach towards the midface is one of the most important interventions for practitioners when treating age-related changes of the face. Currently a plethora of procedures are used and presented. However, few of these approaches have been validated or passed review board assigned evaluations. Therefore, it is the aim of this work to establish a guideline manual for practitioners for a safe and effective mid-face treatment based on the most current concepts of facial anatomy. The latter is based on the 5-layered structural arrangement and its understanding is the key towards the favoured outcome and for minimizing complications.",
"title": ""
},
{
"docid": "711c56cad778337510bcf1629f6293cc",
"text": "Media-related commercial marketing aimed at promoting the purchase of products and services by children, and by adults for children, is ubiquitous and has been associated with negative health consequences such as poor nutrition and physical inactivity. But, as Douglas Evans points out, not all marketing in the electronic media is confined to the sale of products. Increasingly savvy social marketers have begun to make extensive use of the same techniques and strategies used by commercial marketers to promote healthful behaviors and to counter some of the negative effects of conventional media marketing to children and adolescents. Evans points out that social marketing campaigns have been effective in helping to prevent and control tobacco use, increase physical activity, improve nutrition, and promote condom use, as well as other positive health behaviors. He reviews the evidence from a number of major recent campaigns and programming in the United States and overseas and describes the evaluation and research methods used to determine their effectiveness. He begins his review of the field of social marketing by describing how it uses many of the strategies practiced so successfully in commercial marketing. He notes the recent development of public health brands and the use of branding as a health promotion strategy. He then goes on to show how social marketing can promote healthful behavior, how it can counter media messages about unhealthful behavior, and how it can encourage discussions between parents and children. Evans concludes by noting some potential future applications to promote healthful media use by children and adolescents and to mitigate the effects of exposure to commercial marketing. These include adapting lessons learned from previous successful campaigns, such as delivering branded messages that promote healthful alternative behaviors. Evans also outlines a message strategy to promote \"smart media use\" to parents, children, and adolescents and suggests a brand based on personal interaction as a desirable alternative to \"virtual interaction\".",
"title": ""
},
{
"docid": "31d2e56c01f53c25c6c9bfcabe21fcbe",
"text": "In this paper, we propose a novel computer vision-based fall detection system for monitoring an elderly person in a home care, assistive living application. Initially, a single camera covering the full view of the room environment is used for the video recording of an elderly person's daily activities for a certain time period. The recorded video is then manually segmented into short video clips containing normal postures, which are used to compose the normal dataset. We use the codebook background subtraction technique to extract the human body silhouettes from the video clips in the normal dataset and information from ellipse fitting and shape description, together with position information, is used to provide features to describe the extracted posture silhouettes. The features are collected and an online one class support vector machine (OCSVM) method is applied to find the region in feature space to distinguish normal daily postures and abnormal postures such as falls. The resultant OCSVM model can also be updated by using the online scheme to adapt to new emerging normal postures and certain rules are added to reduce false alarm rate and thereby improve fall detection performance. From the comprehensive experimental evaluations on datasets for 12 people, we confirm that our proposed person-specific fall detection system can achieve excellent fall detection performance with 100% fall detection rate and only 3% false detection rate with the optimally tuned parameters. This work is a semiunsupervised fall detection system from a system perspective because although an unsupervised-type algorithm (OCSVM) is applied, human intervention is needed for segmenting and selecting of video clips containing normal postures. As such, our research represents a step toward a complete unsupervised fall detection system.",
"title": ""
},
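Note: the abstract above trains a one-class SVM on features of normal posture silhouettes. The sketch below shows the bare one-class-SVM step using scikit-learn's batch OneClassSVM; the feature choices, data values and hyperparameters are assumptions, and the online updating and rule-based filtering of the cited system are not shown.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Each row: illustrative features of one silhouette, e.g.
# [ellipse orientation (deg), aspect ratio, normalized centroid height].
normal_postures = np.array([
    [85.0, 2.9, 0.55],
    [88.0, 3.1, 0.57],
    [84.0, 2.7, 0.54],
    [87.0, 3.0, 0.56],
])

# nu bounds the fraction of training postures treated as outliers (assumed value).
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(normal_postures)

candidate = np.array([[12.0, 1.1, 0.12]])   # a low, horizontal, lying-down posture
is_normal = model.predict(candidate)[0] == 1
print("normal posture" if is_normal else "fall suspected")
```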
{
"docid": "03726ab44d068b69eb361a1603db05b9",
"text": "Nowadays, cybercrime is growing rapidly around the world, as new technologies, applications and networks emerge. In addition, the Deep Web has contributed to the growth of illegal activities in cyberspace. As a result, cybercriminals are taking advantage of system vulnerabilities for their own benefit. This article presents the history and conceptualization of cybercrime, explores different categorizations of cybercriminals and cyberattacks, and sets forth our exhaustive cyberattack typology, or taxonomy. Common categories include where the computer is the target to commit the crime, where the computer is used as a tool to perpetrate the felony, or where a digital device is an incidental condition to the execution of a crime. We conclude our study by analyzing lessons learned and future actions that can be undertaken to tackle cybercrime and harden cybersecurity at all levels.",
"title": ""
},
{
"docid": "7130731b6603e4be28e8503c185176f2",
"text": "CAViAR is a mobile software system for indoor environments that provides to the mobile user equipped with a smartphone indoor localization, augmented reality (AR), visual interaction, and indoor navigation. These capabilities are possible with the availability of state of the art AR technologies. The mobile application includes additional features, such as indoor maps, shortest path, inertial navigation, places of interest, location sharing and voice-commanded search. CAViAR was tested in a University Campus as one of the technologies to be used later in an intelligent Campus environment.",
"title": ""
},
{
"docid": "b114ba10874b57682ee6a14d3f04d469",
"text": "Mobile delay tolerant network (MDTN) is a kind of no stabilized end-to-end connection network and has the characteristics of long time delay and intermittent interruption. To forward a network packet, MDTN relies on the relay nodes, using the “store — carry — forwards” routing method. However, nodes will be selfish and unwilling to forward messages for others due to the limited resources such as energy, storage space and bandwidth. Therefore, it is necessary to bring in incentive mechanism to motivate the selfish nodes to cooperatively forward messages. In this paper, we divide present incentive mechanisms into three categories: reputation-based scheme, tit-for-tat (TFT)-based incentive scheme and credit-based incentive scheme. Then we qualitatively analyze and compare typical incentive mechanisms have been proposed. Finally, we make a conclusion and point out the inadequacies in present incentive mechanisms under MDTN.",
"title": ""
},
{
"docid": "5717c8148c93b18ec0e41580a050bf3a",
"text": "Verifiability is one of the core editing principles in Wikipedia, editors being encouraged to provide citations for the added content. For a Wikipedia article, determining the citation span of a citation, i.e. what content is covered by a citation, is important as it helps decide for which content citations are still missing. We are the first to address the problem of determining the citation span in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by a citation. We propose a sequence classification approach where for a paragraph and a citation, we determine the citation span at a finegrained level. We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics.",
"title": ""
},
{
"docid": "f282a0e666a2b2f3f323870fc07217bd",
"text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profi tability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require adequate level of nitrogen, favoring plants and fruit growth. Most the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolong the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general for most cultivars, the fi rst harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.",
"title": ""
},
{
"docid": "605a078c74d37007654094b4b426ece8",
"text": "Currently, blockchain technology, which is decentralized and may provide tamper-resistance to recorded data, is experiencing exponential growth in industry and research. In this paper, we propose the MIStore, a blockchain-based medical insurance storage system. Due to blockchain’s the property of tamper-resistance, MIStore may provide a high-credibility to users. In a basic instance of the system, there are a hospital, patient, insurance company and n servers. Specifically, the hospital performs a (t, n)-threshold MIStore protocol among the n servers. For the protocol, any node of the blockchain may join the protocol to be a server if the node and the hospital wish. Patient’s spending data is stored by the hospital in the blockchain and is protected by the n servers. Any t servers may help the insurance company to obtain a sum of a part of the patient’s spending data, which servers can perform homomorphic computations on. However, the n servers cannot learn anything from the patient’s spending data, which recorded in the blockchain, forever as long as more than n − t servers are honest. Besides, because most of verifications are performed by record-nodes and all related data is stored at the blockchain, thus the insurance company, servers and the hospital only need small memory and CPU. Finally, we deploy the MIStore on the Ethererum blockchain and give the corresponding performance evaluation.",
"title": ""
},
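Note: the abstract above relies on a (t, n)-threshold scheme with homomorphic computation over shares. The toy sketch below uses Shamir-style secret sharing over a prime field to show why sums of shares reconstruct sums of secrets; the prime modulus, function names and data are assumptions, and this is not the MIStore protocol itself.

```python
import random

PRIME = 2_147_483_647  # field modulus (a convenient Mersenne prime; toy choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Additive homomorphism: adding shares of two spending records pointwise
# yields shares of the summed amount.
a_shares = share(120, t=3, n=5)
b_shares = share(80, t=3, n=5)
summed = [(x, (ya + yb) % PRIME) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(summed[:3]) == 200
```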
{
"docid": "9ba6a2042e99c3ace91f0fc017fa3fdd",
"text": "This paper proposes a two-element multi-input multi-output (MIMO) open-slot antenna implemented on the display ground plane of a laptop computer for eight-band long-term evolution/wireless wide-area network operations. The metal surroundings of the antennas have been well integrated as a part of the radiation structure. In the single-element open-slot antenna, the nearby hinge slot (which is bounded by two ground planes and two hinges) is relatively large as compared with the open slot itself and acts as a good radiator. In the MIMO antenna consisting of two open-slot elements, a T slot is embedded in the display ground plane and is connected to the hinge slot. The T and hinge slots when connected behave as a radiator; whereas, the T slot itself functions as an isolation element. With the isolation element, simulated isolations between the two elements of the MIMO antenna are raised from 8.3–11.2 to 15–17.1 dB in 698–960 MHz and from 12.1–21 to 15.9–26.7 dB in 1710–2690 MHz. Measured isolations with the isolation element in the desired low- and high-frequency ranges are 17.6–18.8 and 15.2–23.5 dB, respectively. Measured and simulated efficiencies for the two-element MIMO antenna with either element excited are both larger than 50% in the desired operating frequency bands.",
"title": ""
},
{
"docid": "544591326b250f5d68a64f793d55539b",
"text": "Introduction: Exfoliative cheilitis, one of a spectrum of diseases that affect the vermilion border of the lips, is uncommon and has no known cause. It is a chronic superficial inflammatory disorder of the vermilion borders of the lips characterized by persistent scaling; it can be a difficult condition to manage. The diagnosis is now restricted to those few patients whose lesions cannot be attributed to other causes, such as contact sensitization or light. Case Report: We present a 17 year-old male presented to the out clinic in Baghdad with the chief complaint of a persistent scaly on his lower lips. The patient reported that the skin over the lip thickened gradually over a 3 days period and subsequently became loose, causing discomfort. Once he peeled away the loosened layer, a new layer began to form again. Conclusion: The lack of specific treatment makes exfoliative cheilitis a chronic disease that radically affects a person’s life. The aim of this paper is to describe a case of recurrent exfoliative cheilitis successfully treated with intralesional corticosteroids and to present possible hypotheses as to the cause.",
"title": ""
}
] |
scidocsrr
|
383d3932f49d674b625568a3e6666d21
|
Designing urban media façades: cases and challenges
|
[
{
"docid": "31ff39eb4322e9856f56729a4d068b73",
"text": "Using media façades as a subcategory of urban computing, this paper contributes to the understanding of spatial interaction, sense-making, and social mediation as part of identifying key characteristics of interaction with media façades. Our research addresses in particular the open-ended but framed nature of interaction, which in conjunction with varying interpretations enables individual sense-making. Moreover, we contribute to the understanding of flexible social interaction by addressing urban interaction in relation to distributed attention, shared focus, dialogue and collective action. Finally we address challenges for interaction designers encountered in a complex spatial setting calling for a need to take into account multiple viewing and action positions. Our researchthrough-design approach has included a real-life design intervention in terms of the design, implementation, and reflective evaluation of a 180 m (1937 square feet) interactive media façade in operation 24/7 for more than 50 days.",
"title": ""
},
{
"docid": "3cd39df3222b44989bc3c1e3c66a386e",
"text": "In interaction design for experience-oriented uses of technology, a central facet of aesthetics of interaction is rooted in the user's experience of herself “performing her perception.” By drawing on performance (theater) theory, phenomenology and sociology and with references to recent HCI-work on the relation between the system and the performer/user and the spectator's relation to this dynamic, we show how the user is simultaneously operator, performer and spectator when interacting. By engaging with the system, she continuously acts out these three roles and her awareness of them is crucial in her experience. We argue that this 3-in-1 is always already shaping the user's understanding and perception of her interaction as it is staged through her experience of the object's form and expression. Through examples ranging from everyday technologies utilizing performances of interaction to spatial contemporary artworks, digital as well as analogue, we address the notion of the performative spectator and the spectating performer. We demonstrate how perception is also performative and how focus on this aspect seems to be crucial when designing experience-oriented products, systems and services.",
"title": ""
}
] |
[
{
"docid": "13db8cca0c58bb14a09effdf08cf909c",
"text": "This study aimed to compare the vertical marginal gap of teeth restored with lithium disilicate crowns fabricated using CAD/CAM or by pressed ceramic approach. Twenty mandibular third molar teeth were collected after surgical extractions and prepared to receive full veneer crowns. Teeth were optically scanned and lithium disilicate blocks were used to fabricate crowns using CAD/CAM technique. Polyvinyl siloxane impressions of the prepared teeth were made and monolithic pressed lithium disilicate crowns were fabricated. The marginal gap was measured using optical microscope at 200× magnification (Keyence VHX-5000, Japan). Statistical analysis was performed using Wilcoxon test. The lithium disilicate pressed crowns had significantly smaller (p = 0.006) marginal gaps (38 ± 12 μm) than the lithium disilicate CAD/CAM crowns (45 ± 12 μm). This research indicates that lithium disilicate crowns fabricated with the press technique have measurably smaller marginal gaps compared with those fabricated with CAD/CAM technique within in vitro environments. The marginal gaps achieved by the crowns across all groups were within a clinically acceptable range.",
"title": ""
},
{
"docid": "e8b29527805a29dfe12c22643345e440",
"text": "Highly cited articles are interesting because of the potential association between high citation counts and high quality research. This study investigates the 82 most highly cited Information Science and Library Science’ (IS&LS) articles (the top 0.1%) in the Web of Science from the perspectives of disciplinarity, annual citation patterns, and first author citation profiles. First, the relative frequency of these 82 articles was much lower for articles solely in IS&LS than for those in IS&LS and at least one other subject, suggesting that that the promotion of interdisciplinary research in IS&LS may be conducive to improving research quality. Second, two thirds of the first authors had an h-index in IS&LS of less than eight, show that much significant research is produced by researchers without a high overall IS&LS research productivity. Third, there is a moderate correlation (0.46) between citation ranking and the number of years between peak year and year of publication. This indicates that high quality ideas and methods in IS&LS often are deployed many years after being published.",
"title": ""
},
{
"docid": "ccbf1f33f16e7c5283f6f7cbb51d0edd",
"text": "This paper reviews current research on supply chain management (SCM) within the context of tourism. SCM in the manufacturing industry has attracted widespread research interest over the past two decades, whereas studies of SCM in the tourism industry are very limited. Stakeholders in the tourism industry interact with each other to resolve their divergent business objectives across different operating systems. The potential benefit of considering not only individual enterprises but also the tourism value chain becomes evident. The paper examines the characteristics of tourism products, and identifies and explores core issues and concepts in tourism supply chains (TSCs) and tourism supply chain management (TSCM). Although there is an emerging literature on TSCM or its equivalents, progress is uneven, as most research focuses on distribution and marketing activities without fully considering the whole range of different suppliers involved in the provision and consumption of tourism products. This paper provides a systematic review of current tourism studies from the TSCM perspective and develops a framework for TSCM research that should be of great value not only to those who wish to extend their research into this new and exciting area, but also to tourism and hospitality decision makers. The paper also identifies key research questions in TSCM worthy of future theoretical and empirical exploration. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d13ce7762aeded7a40a7fbe89f1beccf",
"text": "[Purpose] This study aims to examined the effect of the self-myofascial release induced with a foam roller on the reduction of stress by measuring the serum concentration of cortisol. [Subjects and Methods] The subjects of this study were healthy females in their 20s. They were divided into the experimental and control groups. Both groups, each consisting of 12 subjects, were directed to walk for 30 minutes on a treadmill. The control group rested for 30 minutes of rest by lying down, whereas the experimental group was performed a 30 minutes of self-myofascial release program. [Results] Statistically significant levels of cortisol concentration reduction were observed in both the experimental group, which used the foam roller, and the control group. There was no statistically significant difference between the two groups. [Conclusion] The Self-myofascial release induced with a foam roller did not affect the reduction of stress.",
"title": ""
},
{
"docid": "47084e8587696dc9d392d895a99ddb83",
"text": "We present an online approach to efficiently and simultaneously detect and track the 2D pose of multiple people in a video sequence. We build upon Part Affinity Field (PAF) representation designed for static images, and propose an architecture that can encode and predict Spatio-Temporal Affinity Fields (STAF) across a video sequence. In particular, we propose a novel temporal topology cross-linked across limbs which can consistently handle body motions of a wide range of magnitudes. Additionally, we make the overall approach recurrent in nature, where the network ingests STAF heatmaps from previous frames and estimates those for the current frame. Our approach uses only online inference and tracking, and is currently the fastest and the most accurate bottom-up approach that is runtime invariant to the number of people in the scene and accuracy invariant to input frame rate of camera. Running at ∼30 fps on a single GPU at single scale, it achieves highly competitive results on the PoseTrack benchmarks. 1",
"title": ""
},
{
"docid": "ae6e93de72d10589551b441eaf5077ae",
"text": "The interest in cloud computing has increased rapidly in the last two decades. This increased interest is attributed to the important role played by cloud computing in the various aspects of our life. Cloud computing is recently emerged as a new paradigm for hosting and delivering services over the Internet. It is attractive to business owners as well as to researchers as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. As cloud computing is done through the Internet, it faces several kinds of threats due to its nature, where it depends on the network and its users who are distributed around the world. These threats differ in type, its side effect, its reasons, and its main purposes. This survey presents the most critical threats to cloud computing with its impacts, its reasons, and some suggested solutions. In addition, this survey determines what the main aspects of the cloud and the security attributes that are affected by each one of these threats. As a result of this survey, we order the most critical threats according to the level of its impact.",
"title": ""
},
{
"docid": "505137d61a0087e054a2cf09c8addb4b",
"text": "A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs.",
"title": ""
},
{
"docid": "2ed183563bd5cdaafa96b03836883730",
"text": "This is an introduction to the Classic Paper on MOSFET scaling by R. Dennardet al., “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions,” published in the IEEE Journal of Solid-State Circuitsin October 1974. The history of scaling and its application to very large scale integration (VLSI) MOSFET technology is traced from 1970 to 1998. The role of scaling in the profound improvements in power delay product over the last three decades is analyzed in basic terms.",
"title": ""
},
{
"docid": "d880349c2760a8cd71d86ea3212ba1f0",
"text": "As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.",
"title": ""
},
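Note: the abstract above formulates school bus scheduling as a mixed integer program. The toy model below, written with the PuLP modelling library, only illustrates the flavor of such a formulation (assign trips to buses, forbid overlapping trips on one bus, minimize buses used); the data, variable names and single objective are assumptions and omit the travel-distance objective and heterogeneous fleet of the cited work.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Toy instance: 4 trips with (start, end) times in hours, at most 3 buses.
trips = {0: (7.0, 7.6), 1: (7.2, 7.9), 2: (8.0, 8.5), 3: (8.6, 9.2)}
buses = range(3)

prob = LpProblem("bus_scheduling", LpMinimize)
x = {(t, b): LpVariable(f"x_{t}_{b}", cat=LpBinary) for t in trips for b in buses}
used = {b: LpVariable(f"used_{b}", cat=LpBinary) for b in buses}

prob += lpSum(used[b] for b in buses)          # minimize number of buses in service

for t in trips:                                 # every trip served exactly once
    prob += lpSum(x[t, b] for b in buses) == 1

for b in buses:
    for t in trips:                             # serving any trip marks the bus as used
        prob += x[t, b] <= used[b]
    for t1 in trips:                            # no two overlapping trips on one bus
        for t2 in trips:
            if t1 < t2 and trips[t1][1] > trips[t2][0] and trips[t2][1] > trips[t1][0]:
                prob += x[t1, b] + x[t2, b] <= 1

prob.solve()
for b in buses:
    assigned = [t for t in trips if x[t, b].value() == 1]
    if assigned:
        print(f"bus {b} serves trips {assigned}")
```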
{
"docid": "6f3985fa6c66bca088394947e0db9e28",
"text": "This paper aims to check if the current and prompt technological revolution altering the whole world has crucial impacts on the Tunisian banking sector. Particularly, this study seeks some clues on which we can rely in order to understand the customers’ behavior regarding the adoption of electronic banking. To achieve this purpose, an empirical research is carried out in Tunisia and it reveals that a panoply of factors is affecting the customers-attitude toward e-banking. For instance; age, gender and educational qualifications seem to be important and they split up the group into electronic banking adopters and traditional banking defenders and so, they have significant influence on the customers’ adoption of e-banking. Furthermore, this study shows that despite the presidential incentives and in spite of being fully aware of the e-banking’s benefits, numerous respondents are still using the conventional banking. It is worthy to mention that the fear of loss because of transactions errors or hackers plays a significant role in alienating Tunisian customers from online banking. Finally, a number of this study’s limitations are highlighted and some research perspectives are suggested. JIBC December 2009, Vol. 14, No. 3 2",
"title": ""
},
{
"docid": "a04c057110048695669feef07638ef3c",
"text": "The structure of recent models of the relationship between natural resource abundance or intensity and economic growth is nearly always the same. An abundance of or heavy dependence on natural resources is taken to influence some variable or mechanism “X” which impedes growth. An important challenge for economic growth theorists and empirical workers in the field is to identify and map these intermediate variables and mechanisms. To date, four main channels of transmission from natural resource abundance or intensity to slow economic growth have been suggested in the literature. As we shall see, these channels can be described as crowding out: natural capital, it will be argued, tends to crowd out other types of capital and thereby inhibit economic growth.",
"title": ""
},
{
"docid": "c28431406873b682a5dabb8a8fed510f",
"text": "Business Intelligence (BI) tools provide fundamental support for analyzing large volumes of information. Data Warehouses (DW) and Online Analytical Processing (OLAP) tools are used to store and analyze data. Nowadays more and more information is available on the Web in the form of Resource Description Framework (RDF), and BI tools have a huge potential of achieving better results by integrating real-time data from web sources into the analysis process. In this paper, we describe a framework for so-called exploratory OLAP over RDF sources. We propose a system that uses a multidimensional schema of the OLAP cube expressed in RDF vocabularies. Based on this information the system is able to query data sources, extract and aggregate data, and build a cube. We also propose a computer-aided process for discovering previously unknown data sources and building a multidimensional schema of the cube. We present a use case to demonstrate the applicability of the approach.",
"title": ""
},
{
"docid": "69c9aa877b9416e2a884eaa5408eb890",
"text": "Integrating trust and automation in finance.",
"title": ""
},
{
"docid": "38a4f83778adea564e450146060ef037",
"text": "The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output.",
"title": ""
},
{
"docid": "d484c24551191360bc05b768e2fa9957",
"text": "The paper aims to develop and design a cloud-based Quran portal using Drupal technology and make it available in multiple services. The portal can be hosted on cloud and users around the world can access it using any Internet enabled device. The proposed portal includes different features to become a center of learning resources for various users. The portal is further designed to promote research and development of new tools and applications includes Application Programming Interface (API) and Search API, which exposes the search to public, and make the searching Quran efficient and easy. The cloud application can request various surah or ayah using the API and by passing filter.",
"title": ""
},
{
"docid": "fcf7f7562fe3e01bba64a61b7f54b04c",
"text": "IMPORTANCE\nBoth bullies and victims of bullying are at risk for psychiatric problems in childhood, but it is unclear if this elevated risk extends into early adulthood.\n\n\nOBJECTIVE\nTo test whether bullying and/or being bullied in childhood predicts psychiatric problems and suicidality in young adulthood after accounting for childhood psychiatric problems and family hardships.\n\n\nDESIGN\nProspective, population-based study.\n\n\nSETTING\nCommunity sample from 11 counties in Western North Carolina.\n\n\nPARTICIPANTS\nA total of 1420 participants who had being bullied and bullying assessed 4 to 6 times between the ages of 9 and 16 years. Participants were categorized as bullies only, victims only, bullies and victims (hereafter referred to as bullies/victims), or neither.\n\n\nMAIN OUTCOME MEASURE\nPsychiatric outcomes, which included depression, anxiety, antisocial personality disorder, substance use disorders, and suicidality (including recurrent thoughts of death, suicidal ideation, or a suicide attempt), were assessed in young adulthood (19, 21, and 24-26 years) by use of structured diagnostic interviews. RESULTS Victims and bullies/victims had elevated rates of young adult psychiatric disorders, but also elevated rates of childhood psychiatric disorders and family hardships. After controlling for childhood psychiatric problems or family hardships, we found that victims continued to have a higher prevalence of agoraphobia (odds ratio [OR], 4.6 [95% CI, 1.7-12.5]; P < .01), generalized anxiety (OR, 2.7 [95% CI, 1.1-6.3]; P < .001), and panic disorder (OR, 3.1 [95% CI, 1.5-6.5]; P < .01) and that bullies/victims were at increased risk of young adult depression (OR, 4.8 [95% CI, 1.2-19.4]; P < .05), panic disorder (OR, 14.5 [95% CI, 5.7-36.6]; P < .001), agoraphobia (females only; OR, 26.7 [95% CI, 4.3-52.5]; P < .001), and suicidality (males only; OR, 18.5 [95% CI, 6.2-55.1]; P < .001). Bullies were at risk for antisocial personality disorder only (OR, 4.1 [95% CI, 1.1-15.8]; P < .04).\n\n\nCONCLUSIONS AND RELEVANCE\nThe effects of being bullied are direct, pleiotropic, and long-lasting, with the worst effects for those who are both victims and bullies.",
"title": ""
},
{
"docid": "27bba2c0a5d3d7f3260b64c3fb0ef4f6",
"text": "Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The \"central hit strategy\" selectively targets central nodes/edges of the flexible networks of infectious agents or cancer cells to kill them. The \"network influence strategy\" works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes/edges. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing >1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.",
"title": ""
},
{
"docid": "70a8de504b5ab8cdea1b87ab6028a3f3",
"text": "There are two major challenges in universal stair climbing: stairs without riser and with nose, and stairs with various dimensions. In this study, we proposed an indoor robot platform to overcome these challenges. First, to create an angle of attack, the Tusk, a passive, protruded element, was added in front of a 4-wheel robot. For design analysis and optimization of the Tusk, a simplified model of universal stair climbing was applied. To accommodate stairs without risers and with nose, the assistive track mechanism was applied. To climb the stair regardless of its dimension, length-adjustable mechanism was added. The results indicated the robot with these mechanisms successfully overcame each challenge. The performance was better than most conventional stair-climbing robots in terms of the range of compatible stairs. We expect these new approaches to expand the range of indoor robot operation with minimal cost.",
"title": ""
},
{
"docid": "14520419a4b0e27df94edc4cf23cde65",
"text": "In this paper we propose and examine non–parametric statistical tests to define similarity and homogeneity measure s for textures. The statistical tests are applied to the coeffi cients of images filtered by a multi–scale Gabor filter bank. We will demonstrate that these similarity measures are useful for both, texture based image retrieval and for unsupervised texture segmentation, and hence offer an unified approach to these closely related tasks. We present results on Brodatz–like micro–textures and a collection of real–word images.",
"title": ""
},
{
"docid": "a18ef88938a0d391874a8be61c27694a",
"text": "A growing body of literature has emerged that focuses upon cognitive assessment of video game player experience. Given the growing popularity of video gaming and the increasing literature on cognitive aspects of video gamers, there is a growing need for novel approaches to assessment of the cognitive processes that occur while persons are immersed in video games. In this study, we assessed various stimulus modalities and gaming events using an off-the-shelf EEG devise. A significant difference was found among different stimulus modalities with increasingly difficult cognitive demands. Specifically, beta and gamma power were significantly increased during high intensity events when compared to low intensity gaming events. Our findings suggest that the Emotiv EEG can be used to differentiate between varying stimulus modalities and accompanying cognitive processes. 2015 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
a165e03a4a82d5567d3c3d07dd3c5899
|
Appearance contrast for fast, robust trail-following
|
[
{
"docid": "1589e72380265787a10288c5ad906670",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
}
] |
[
{
"docid": "91c0658dbd6f078fdf53e9ae276a6f73",
"text": "Given a photo collection of \"unconstrained\" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.",
"title": ""
},
{
"docid": "fcb36c8cd9d947aaf52e927acd49a453",
"text": "In recent years, Blockchain technology has been highly valued, and the related applications have begun to be developed in large numbers. A smart contract is a software component encompass business logics and transactions that run on a blockchain. Thus, verifying whether the contract logics fully reflect the business requirements are one of the most important software engineering issues in blockchain application development. Currently, developing smart contracts is still a challenging task even for experienced programmers due to the lacking of an integrated tool for developing and testing. In response to this challenge, this paper presents a service platform that supports BDD-style (Behavior-Driven Development) smart contract development, testing, and deployment for the Ethereum-based blockchains. This platform focuses on providing and resolving the cross-cutting concerns across the life-cycle of smart contract development. The feasibility of this platform is shown by demonstrating how an application scenario, namely, loyalty points exchange, can be implemented using the proposed platform. Our experiences indicate that the burdens of developers when developing smart contracts can be effectively reduced and thus increases the quality of contracts.",
"title": ""
},
{
"docid": "02c687cbe7961f082c60fad1cc3f3f80",
"text": "The simplicity of Transpose Jacobian (TJ) control is a significant characteristic of this algorithm for controlling robotic manipulators. Nevertheless, a poor performance may result in tracking of fast trajectories, since it is not dynamics-based. Use of high gains can deteriorate performance seriously in the presence of feedback measurement noise. Another drawback is that there is no prescribed method of selecting its control gains. In this paper, based on feedback linearization approach a Modified TJ (MTJ) algorithm is presented which employs stored data of the control command in the previous time step, as a learning tool to yield improved performance. The gains of this new algorithm can be selected systematically, and do not need to be large, hence the noise rejection characteristics of the algorithm are improved. Based on Lyapunov’s theorems, it is shown that both the standard and the MTJ algorithms are asymptotically stable. Analysis of the required computational effort reveals the efficiency of the proposed MTJ law compared to the Model-based algorithms. Simulation results are presented which compare tracking performance of the MTJ algorithm to that of the TJ and Model-Based algorithms in various tasks. Results of these simulations show that performance of the new MTJ algorithm is comparable to that of Computed Torque algorithms, without requiring a priori knowledge of plant dynamics, and with reduced computational burden. Therefore, the proposed algorithm is well suited to most industrial applications where simple efficient algorithms are more appropriate than complicated theoretical ones with massive computational burden. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5cd29a37f0357aa242244aef4d12a87d",
"text": "LEARNING OBJECTIVES\nAfter studying this article, the participant should be able to: 1. Describe the alternatives for auricular reconstruction. 2. Discuss the pros and cons of autogenous reconstruction of total or subtotal auricular defects. 3. Enumerate the indications for prosthetic reconstruction of total or subtotal auricular defects. 4. Understand the complexity of and the expertise required for prosthetic reconstruction of auricular defects. The indications for autogenous auricular reconstruction versus prosthetic reconstruction with osseointegrated implant-retained prostheses were outlined in Plastic and Reconstructive Surgery in 1994 by Wilkes et al. of Canada, but because of the relatively recent Food and Drug Administration approval (1995) of extraoral osseointegrated implants, these indications had not been examined by a surgical unit in the United States. The purpose of this article is to present an evolving algorithm based on an experience with 98 patients who underwent auricular reconstruction over a 10-year period. From this experience, the authors conclude that autogenous reconstruction is the procedure of choice in the majority of pediatric patients with microtia. Prosthetic reconstruction of the auricle is considered in such pediatric patients with congenital deformities for the following three relative indications: (1) failed autogenous reconstruction, (2) severe soft-tissue/skeletal hypoplasia, and/or (3) a low or unfavorable hairline. A fourth, and in our opinion the ideal, indication for prosthetic ear reconstruction is the acquired total or subtotal auricular defect, most often traumatic or ablative in origin, which is usually encountered in adults. Although prosthetic reconstruction requires surgical techniques that are less demanding than autogenous reconstruction, construction of the prosthesis is a time-consuming task requiring experience and expertise. Although autogenous reconstruction presents a technical challenge to the surgeon, it is the prosthetic reconstruction that requires lifelong attention and may be associated with late complications. This article reports the first American series of auricular reconstruction containing both autogenous and prosthetic methods by a single surgical team.",
"title": ""
},
{
"docid": "c67010d61ec7f9ea839bbf7d2dce72a1",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.",
"title": ""
},
{
"docid": "b9ad751e5b7e46fd79848788b10d7ab9",
"text": "In this paper, we propose a cross-lingual convolutional neural network (CNN) model that is based on word and phrase embeddings learned from unlabeled data in two languages and dependency grammar. Compared to traditional machine translation (MT) based methods for cross lingual sentence modeling, our model is much simpler and does not need parallel corpora or language specific features. We only use a bilingual dictionary and dependency parser. This makes our model particularly appealing for resource poor languages. We evaluate our model using English and Chinese data on several sentence classification tasks. We show that our model achieves a comparable and even better performance than the traditional MT-based method.",
"title": ""
},
{
"docid": "0b245fedd608d21389372faa192d62a0",
"text": "This paper explores the effectiveness of Data Mining (DM) classification techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated to FFS. In accomplishing the task of management fraud detection, auditors could be facilitated in their work by using Data Mining techniques. This study investigates the usefulness of Decision Trees, Neural Networks and Bayesian Belief Networks in the identification of fraudulent financial statements. The input vector is composed of ratios derived from financial statements. The three models are compared in terms of their performances. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eed4d069544649b2c80634bdacbda372",
"text": "Data mining tools become important in finance and accounting. Their classification and prediction abilities enable them to be used for the purposes of bankruptcy prediction, going concern status and financial distress prediction, management fraud detection, credit risk estimation, and corporate performance prediction. This study aims to provide a state-of-the-art review of the relative literature and to indicate relevant research opportunities.",
"title": ""
},
{
"docid": "c233b0a590d83ad93e5c9ba742825d11",
"text": "Regularization is important for end-to-end speech models, since the models are highly flexible and easy to overfit. Data augmentation and dropout has been important for improving end-to-end models in other domains. However, they are relatively under explored for end-to-end speech models. Therefore, we investigate the effectiveness of both methods for end-to-end trainable, deep speech recognition models. We augment audio data through random perturbations of tempo, pitch, volume, temporal alignment, and adding random noise. We further investigate the effect of dropout when applied to the inputs of all layers of the network. We show that the combination of data augmentation and dropout give a relative performance improvement on both Wall Street Journal (WSJ) and LibriSpeech dataset of over 20%. Our model performance is also competitive with other end-to-end speech models on both datasets.",
"title": ""
},
{
"docid": "37f55e03f4d1ff3b9311e537dc7122b5",
"text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.",
"title": ""
},
{
"docid": "cf3c9769496d51078904495d18198626",
"text": "Five different threshold segmentation based approaches have been reviewed and compared over here to extract the tumor from set of brain images. This research focuses on the analysis of image segmentation methods, a comparison of five semi-automated methods have been undertaken for evaluating their relative performance in the segmentation of tumor. Consequently, results are compared on the basis of quantitative and qualitative analysis of respective methods. The purpose of this study was to analytically identify the methods, most suitable for application for a particular genre of problems. The results show that of the region growing segmentation performed better than rest in most cases.",
"title": ""
},
{
"docid": "599fb363d80fd1a7a6faaccbde3ecbb5",
"text": "In this survey a new application paradigm life and safety for critical operations and missions using wearable Wireless Body Area Networks (WBANs) technology is introduced. This paradigm has a vast scope of applications, including disaster management, worker safety in harsh environments such as roadside and building workers, mobile health monitoring, ambient assisted living and many more. It is often the case that during the critical operations and the target conditions, the existing infrastructure is either absent, damaged or overcrowded. In this context, it is envisioned that WBANs will enable the quick deployment of ad-hoc/on-the-fly communication networks to help save many lives and ensuring people's safety. However, to understand the applications more deeply and their specific characteristics and requirements, this survey presents a comprehensive study on the applications scenarios, their context and specific requirements. It explores details of the key enabling standards, existing state-of-the-art research studies, and projects to understand their limitations before realizing aforementioned applications. Application-specific challenges and issues are discussed comprehensively from various perspectives and future research and development directions are highlighted as an inspiration for new innovative solutions. To conclude, this survey opens up a good opportunity for companies and research centers to investigate old but still new problems, in the realm of wearable technologies, which are increasingly evolving and getting more and more attention recently.",
"title": ""
},
{
"docid": "1f29786ab85cc54e4e5c860cee2d147e",
"text": "Due to its wide applicability and ease of use, the analytic hierarchy process (AHP) has been studied extensively for the last 20 years. Recently, it is observed that the focus has been confined to the applications of the integrated AHPs rather than the stand-alone AHP. The five tools that commonly combined with the AHP include mathematical programming, quality function deployment (QFD), meta-heuristics, SWOT analysis, and data envelopment analysis (DEA). This paper reviews the literature of the applications of the integrated AHPs. Related articles appearing in the international journals from 1997 to 2006 are gathered and analyzed so that the following three questions can be answered: (i) which type of the integrated AHPs was paid most attention to? (ii) which area the integrated AHPs were prevalently applied to? (iii) is there any inadequacy of the approaches? Based on the inadequacy, if any, some improvements and possible future work are recommended. This research not only provides evidence that the integrated AHPs are better than the stand-alone AHP, but also aids the researchers and decision makers in applying the integrated AHPs effectively. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "67a8a8ef9111edd9c1fa88e7c59b6063",
"text": "The process of obtaining intravenous (IV) access, Venipuncture, is an everyday invasive procedure in medical settings and there are more than one billion venipuncture related procedures like blood draws, peripheral catheter insertions, intravenous therapies, etc. performed per year [3]. Excessive venipunctures are both time and resource consuming events causing anxiety, pain and distress in patients, or can lead to severe harmful injuries [8]. The major problem faced by the doctors today is difficulty in accessing veins for intra-venous drug delivery & other medical situations [3]. There is a need to develop vein detection devices which can clearly show veins. This project deals with the design development of non-invasive subcutaneous vein detection system and is implemented based on near infrared imaging and interfaced to a laptop to make it portable. A customized CCD camera is used for capturing the vein images and Computer Software modules (MATLAB & LabVIEW) is used for the processing [3].",
"title": ""
},
{
"docid": "acfe73c1e02fe1bd1b6cbee0674eefd6",
"text": "EDWIN SEVER BECHIR1, MARIANA PACURAR1, TUDOR ALEXANDRU HANTOIU1, ANAMARIA BECHIR2*, OANA SMATREA2, ALEXANDRU BURCEA2, CHERANA GIOGA2, MONICA MONEA1 1 Medicine and Pharmacy University of Tirgu-Mures, Faculty of Dentistry, 38 Gheorghe Marinescu Str., 540142,Tirgu-Mures, Romania 2 Titu Maiorescu University of Bucharest, Faculty of Dentistry, Department of Dental Specialties, 67A Gheorghe Petrascu Str., 031593, Bucharest, Romania",
"title": ""
},
{
"docid": "fa8d2547c3f2524596e97681b846b0e6",
"text": "Native Language Identification (NLI) is a task aimed at determining the native language (L1) of learners of second language (L2) on the basis of their written texts. To date, research on NLI has focused on relatively small corpora. We apply NLI to the recently released EFCamDat corpus which is not only multiple times larger than previous L2 corpora but also provides longitudinal data at several proficiency levels. Our investigation using accurate machine learning with a wide range of linguistic features reveals interesting patterns in the longitudinal data which are useful for both further development of NLI and its application to research on L2 acquisition.",
"title": ""
},
{
"docid": "649b1f289395aa6251fe9f3288209b67",
"text": "Besides game-based learning, gamification is an upcoming trend in education, studied in various empirical studies and found in many major learning management systems. Employing a newly developed qualitative instrument for assessing gamification in a system, we studied five popular LMS for their specific implementations. The instrument enabled experts to extract affordances for gamification in the five categories experiential, mechanics, rewards, goals, and social. Results show large similarities in all of the systems studied and few varieties in approaches to gamification.",
"title": ""
},
{
"docid": "2a9e4ed54dd91eb8a6bad757afc9ac75",
"text": "The modern advancements in digital electronics allow waveforms to be easily synthesized and captured using only digital electronics. The synthesis of radar waveforms using only digital electronics, such as Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs) allows for a majority of the analog chain to be removed from the system. In order to create a constant amplitude waveform, the amplitude distortions must be compensated for. The method chosen to compensate for the amplitude distortions is to pre-distort the waveform so, when it is influenced by the system, the output waveform has a near constant amplitude modulus. The effects of the predistortion were observed to be successful in both range and range-Doppler radar implementations.",
"title": ""
},
{
"docid": "17f171d0d91c1d914600a238f6446650",
"text": "One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design philosophy, which allows us to design the ARMA coefficients independently from the underlying graph, renders the ARMA graph filters suitable in static and, particularly, time-varying settings. The latter occur when the graph signal and/or graph are changing over time. We show that in case of a time-varying graph signal our approach extends naturally to a two-dimensional filter, operating concurrently in the graph and regular time domains. We also derive sufficient conditions for filter stability when the graph and signal are time-varying. The analytical and numerical results presented in this paper illustrate that ARMA graph filters are practically appealing for static and time-varying settings, accompanied by strong theoretical guarantees. Keywords— distributed graph filtering, signal processing on graphs, time-varying graph signals, time-varying graphs",
"title": ""
},
{
"docid": "475ad4a81e5cb6bcffbeb77cae320c44",
"text": "Medical image processing is a very active and fast-growing field that has evolved into an established discipline. Accurate segmentation of medical images is a fundamental step in clinical studies for diagnosis, monitoring, and treatment planning. Manual segmentation of medical images is a time consuming and a tedious task. Therefore the automated segmentation algorithms with high accuracy are of interest. There are several critical factors that determine the performance of a segmentation algorithm. Examples are: the area of application of segmentation technique, reproducibility of the method, accuracy of the results, etc. The purpose of this review is to provide an overview of current image segmentation methods. Their relative efficiency, advantages, and the problems they encounter are discussed. In order to evaluate the segmentation results, some popular benchmark measurements are presented.",
"title": ""
}
] |
scidocsrr
|
ec7641be7993ee010051df7c0782e159
|
Privacy preserving multi-factor authentication with biometrics
|
[
{
"docid": "72385aba9bdf5f8d35985fc8ff98a5ff",
"text": "Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as ecommerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.",
"title": ""
},
{
"docid": "9bbf2a9f5afeaaa0f6ca12e86aef8e88",
"text": "Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a \"skin\" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.",
"title": ""
}
] |
[
{
"docid": "eca2d0509966e77c8a8445cdb297e7d3",
"text": "Interpretation of regression coefficients is sensitive to the scale of the inputs. One method often used to place input variables on a common scale is to divide each numeric variable by its standard deviation. Here we propose dividing each numeric variable by two times its standard deviation, so that the generic comparison is with inputs equal to the mean +/-1 standard deviation. The resulting coefficients are then directly comparable for untransformed binary predictors. We have implemented the procedure as a function in R. We illustrate the method with two simple analyses that are typical of applied modeling: a linear regression of data from the National Election Study and a multilevel logistic regression of data on the prevalence of rodents in New York City apartments. We recommend our rescaling as a default option--an improvement upon the usual approach of including variables in whatever way they are coded in the data file--so that the magnitudes of coefficients can be directly compared as a matter of routine statistical practice.",
"title": ""
},
{
"docid": "c09e479d4adb8861884be6a83561b16d",
"text": "Open-domain social dialogue is one of the long-standing goals of Artificial Intelligence. This year, the Amazon Alexa Prize challenge was announced for the first time, where real customers get to rate systems developed by leading universities worldwide. The aim of the challenge is to converse “coherently and engagingly with humans on popular topics for 20 minutes”. We describe our Alexa Prize system (called ‘Alana’) consisting of an ensemble of bots, combining rule-based and machine learning systems, and using a contextual ranking mechanism to choose a system response. The ranker was trained on real user feedback received during the competition, where we address the problem of how to train on the noisy and sparse feedback obtained during the competition.",
"title": ""
},
{
"docid": "a3e6d006a56913285d1eb6f0a8e1ce55",
"text": "This paper updates and builds on ‘Modelling with Stakeholders’ Voinov and Bousquet, 2010 which demonstrated the importance of, and demand for, stakeholder participation in resource and environmental modelling. This position paper returns to the concepts of that publication and reviews the progress made since 2010. A new development is the wide introduction and acceptance of social media and web applications, which dramatically changes the context and scale of stakeholder interactions and participation. Technology advances make it easier to incorporate information in interactive formats via visualization and games to augment participatory experiences. Citizens as stakeholders are increasingly demanding to be engaged in planning decisions that affect them and their communities, at scales from local to global. How people interact with and access models and data is rapidly evolving. In turn, this requires changes in how models are built, packaged, and disseminated: citizens are less in awe of experts and external authorities, and they are increasingly aware of their own capabilities to provide inputs to planning processes, including models. The continued acceleration of environmental degradation and natural resource depletion accompanies these societal changes, even as there is a growing acceptance of the need to transition to alternative, possibly very different, life styles. Substantive transitions cannot occur without significant changes in human behaviour and perceptions. The important and diverse roles that models can play in guiding human behaviour, and in disseminating and increasing societal knowledge, are a feature of stakeholder processes today. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dcbc7fdd21570336e53c12037839a9c1",
"text": "The objective of this industry study was to shed light on the current situation and improvement needs in software test automation. To this end, 55 industry specialists from 31 organizational units were interviewed. In parallel with the survey, a qualitative study was conducted in 12 selected software development organizations. The results indicated that the software testing processes usually follow systematic methods to a large degree, and have only little immediate or critical requirements for resources. Based on the results, the testing processes have approximately three fourths of the resources they need, and have access to a limited, but usually sufficient group of testing tools. As for the test automation, the situation is not as straightforward: based on our study, the applicability of test automation is still limited and its adaptation to testing contains practical difficulties in usability. In this study, we analyze and discuss these limitations and difficulties.",
"title": ""
},
{
"docid": "dd4750b43931b3b09a5e95eaa74455d1",
"text": "In viticulture, there are several applications where bud detection in vineyard images is a necessary task, susceptible of being automated through the use of computer vision methods. A common and effective family of visual detection algorithms are the scanning-window type, that slide a (usually) fixed size window along the original image, classifying each resulting windowed-patch as containing or not containing the target object. The simplicity of these algorithms finds its most challenging aspect in the classification stage. Interested in grapevine buds detection in natural field conditions, this paper presents a classification method for images of grapevine buds ranging 100 to 1600 pixels in diameter, captured in outdoor, under natural field conditions, in winter (i.e., no grape bunches, very few leaves, and dormant buds), without artificial background, and with minimum equipment requirements. The proposed method uses well-known computer vision technologies: Scale-Invariant Feature Transform for calculating low-level features, Bag of Features for building an image descriptor, and Support Vector Machines for training a classifier. When evaluated over images containing buds of at least 100 pixels in diameter, the approach achieves a recall higher than 0.9 and a precision of 0.86 over all windowed-patches covering the whole bud and down to 60% of it, and scaled up to window patches containing a proportion of 20%-80% of bud versus background pixels. This robustness on the position and size of the window demonstrates its viability for use as the classification stage in a scanning-window detection algorithms.",
"title": ""
},
{
"docid": "4b7ffae0dfa7e43b5456ec08fbd0824e",
"text": "METHODS\nIn this study of patients who underwent internal fixation without fusion for a burst thoracolumbar or lumbar fracture, we compared the serial changes in the injured disc height (DH), and the fractured vertebral body height (VBH) and kyphotic angle between patients in whom the implants were removed and those in whom they were not. Radiological parameters such as injured DH, fractured VBH and kyphotic angle were measured. Functional outcomes were evaluated using the Greenough low back outcome scale and a VAS scale for pain.\n\n\nRESULTS\nBetween June 1996 and May 2012, 69 patients were analysed retrospectively; 47 were included in the implant removal group and 22 in the implant retention group. After a mean follow-up of 66 months (48 to 107), eight patients (36.3%) in the implant retention group had screw breakage. There was no screw breakage in the implant removal group. All radiological and functional outcomes were similar between these two groups. Although solid union of the fractured vertebrae was achieved, the kyphotic angle and the anterior third of the injured DH changed significantly with time (p < 0.05).\n\n\nDISCUSSION\nThe radiological and functional outcomes of both implant removal and retention were similar. Although screw breakage may occur, the implants may not need to be removed.\n\n\nTAKE HOME MESSAGE\nImplant removal may not be needed for patients with burst fractures of the thoracolumbar and lumbar spine after fixation without fusion. However, information should be provided beforehand regarding the possibility of screw breakage.",
"title": ""
},
{
"docid": "8b752b8607b6296b35d34bb59830e8e4",
"text": "The innate immune system is the first line of defense against infection and responses are initiated by pattern recognition receptors (PRRs) that detect pathogen-associated molecular patterns (PAMPs). PRRs also detect endogenous danger-associated molecular patterns (DAMPs) that are released by damaged or dying cells. The major PRRs include the Toll-like receptor (TLR) family members, the nucleotide binding and oligomerization domain, leucine-rich repeat containing (NLR) family, the PYHIN (ALR) family, the RIG-1-like receptors (RLRs), C-type lectin receptors (CLRs) and the oligoadenylate synthase (OAS)-like receptors and the related protein cyclic GMP-AMP synthase (cGAS). The different PRRs activate specific signaling pathways to collectively elicit responses including the induction of cytokine expression, processing of pro-inflammatory cytokines and cell-death responses. These responses control a pathogenic infection, initiate tissue repair and stimulate the adaptive immune system. A central theme of many innate immune signaling pathways is the clustering of activated PRRs followed by sequential recruitment and oligomerization of adaptors and downstream effector enzymes, to form higher-order arrangements that amplify the response and provide a scaffold for proximity-induced activation of the effector enzymes. Underlying the formation of these complexes are co-operative assembly mechanisms, whereby association of preceding components increases the affinity for downstream components. This ensures a rapid immune response to a low-level stimulus. Structural and biochemical studies have given key insights into the assembly of these complexes. Here we review the current understanding of assembly of immune signaling complexes, including inflammasomes initiated by NLR and PYHIN receptors, the myddosomes initiated by TLRs, and the MAVS CARD filament initiated by RIG-1. We highlight the co-operative assembly mechanisms during assembly of each of these complexes.",
"title": ""
},
{
"docid": "1eba4ab4cb228a476987a5d1b32dda6c",
"text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.",
"title": ""
},
{
"docid": "097441efa777112a6b5159c0edf00191",
"text": "A comprehensive review of printed circuit board (PCB) electromagnetic compatibility (EMC) issues, analysis techniques, and possible solutions would fill a large book or more. This review takes a quick look at where the technology of PCB EMC control has been, where it is today, and where it needs to go for the future. As data rates on PCBs have increased, new problems have arisen, requiring new analysis techniques and new solutions. Further development will be needed to keep up with the ever-increasing data rates and smaller form factors.",
"title": ""
},
{
"docid": "57332cd7472707617e864d196ff454ef",
"text": "Vehicle platoons are fully automated vehicles driving in close proximity of each other, where both distance keeping and steering is under automatic control. This paper is aiming at a variant of vehicle platoons, where the lateral control is using forward looking sensors, i.e. camera, radar. Such a system solution implies that the vehicle dynamics are coupled together laterally, in contrast to the classical look-down solutions. For such a platoon, lateral string stability is an important property that the controller needs to guarantee. This article proposes a method for designing such a distributed controller. It also examines the effect of model uncertainties on the lateral string stability of the platoon for the proposed method.",
"title": ""
},
{
"docid": "89d8c092f415b65b75d275fa727e14c4",
"text": "Facial expression editing is a challenging task as it needs a high-level semantic understanding of the input face image. In conventional methods, either paired training data is required or the synthetic face’s resolution is low. Moreover, only the categories of facial expression can be changed. To address these limitations, we propose an Expression Generative Adversarial Network (ExprGAN) for photo-realistic facial expression editing with controllable expression intensity. An expression controller module is specially designed to learn an expressive and compact expression code in addition to the encoder-decoder network. This novel architecture enables the expression intensity to be continuously adjusted from low to high. We further show that our ExprGAN can be applied for other tasks, such as expression transfer, image retrieval, and data augmentation for training improved face expression recognition models. To tackle the small size of the training database, an effective incremental learning scheme is proposed. Quantitative and qualitative evaluations on the widely used Oulu-CASIA dataset demonstrate the effectiveness of ExprGAN.",
"title": ""
},
{
"docid": "e4e372287a5d53bd3926705e01b43235",
"text": "The regular gathering of student information has created a high level of complexity, and also an incredible opportunity for teachers to enhance student learning experience. The digital information that learners leave online about their interests, engagement and their preferences gives significant measures of information that can be mined to customise their learning experience better. The motivation behind this article is to inspect the quickly developing field of Learning Analytics and to study why and how enormous information will benefit teachers, institutes, online course developers and students as a whole. The research will discuss the advancement in Big Data and how is it useful in education, along with an overview of the importance of various stakeholders and the challenges that lie ahead. We also look into the tools and techniques that are put into practice to realize the benefits of Analytics in Education. Our results suggest that this field has the immense scope of development but ethical and privacy issues present a challenge.",
"title": ""
},
{
"docid": "1eb43d21aa090151aef2ba722b6fc704",
"text": "This study was carried out to investigate pre-service teachers’ perceived ease of use, perceived usefulness, attitude and intentions towards the utilization of virtual laboratory package in teaching and learning of Nigerian secondary school physics concepts. Descriptive survey research was employed and 66 fourth and fifth year Physics education students were purposively used as research sample. Four research questions guided the study and a 16-item questionnaire was used as instrument for data collection. The questionnaire was validated by educational technology experts, physics expert and guidance and counselling experts. Pilot study was carried out on year three physics education students and a reliability coefficients ranging from 0.76 to 0.89 was obtained for each of the four sections of the questionnaire. Data collected from the administration of the research instruments were analyzed using descriptive statistics of Mean and Standard Deviation. A decision rule was set, in which, a mean score of 2.50 and above was considered Agreed while a mean score below 2.50 was considered Disagreed. Findings revealed that pre-service physics teachers perceived the virtual laboratory package easy to use and useful with mean scores of 3.18 and 3.34 respectively. Also, respondents’ attitude and intentions to use the package in teaching and learning of physics were positive with mean scores of 3.21 and 3.37 respectively. Based on these findings, it was recommended among others that administrators should equip schools with adequate Information and Communication Technology facilities that would aid students and teachers’ utilization of virtual-based learning environments in teaching and learning process.",
"title": ""
},
{
"docid": "4feab0c5f92502011ed17a425b0f800b",
"text": "This paper gives an insight of how we can store healthcare data digitally like patient's records as an Electronic Health Record (EHR) and how we can generate useful information from these records by using analytics techniques and tools which will help in saving time and money of patients as well as the doctors. This paper is fully focused towards the Maharaja Yeshwantrao Hospital (M.Y.) located in Indore, Madhya Pradesh, India. M.Y hospital is the central India's largest government hospital. It generates large amount of heterogeneous data from different sources like patients health records, laboratory test result, electronic medical equipment, health insurance data, social media, drug research, genome research, clinical outcome, transaction and from Mahatma Gandhi Memorial medical college which is under MY hospital. To manage this data, data analytics may be used to make it useful for retrieval. Hence the concept of \"big data\" can be applied. Big data is characterized as extremely large data sets that can be analysed computationally to find patterns, trends, and associations, visualization, querying, information privacy and predictive analytics on large wide spread collection of data. Big data analytics can be done using Hadoop which plays an effective role in performing meaningful real-time analysis on the large volume of this data to predict the emergency situations before it happens. This paper also discusses about the EHR and the big data usage and its analytics at M.Y. hospital.",
"title": ""
},
{
"docid": "644936acfe1f9ffa0b5f3e8751015d86",
"text": "The use of electromagnetic induction lamps without electrodes has increased because of their long life and energy efficiency. The control of the ignition and luminosity of the lamp is provided by an electronic ballast. Beyond that, the electronic ballast also provides a power factor correction, allowing the minimizing of the lamps impact on the quality of service of the electrical network. The electronic ballast includes several blocks, namely a bridge rectifier, a power factor correcting circuit (PFC), an asymmetric half-bridge inverter with a resonant filter on the inverter output, and a circuit to control the conduction time ot the ballast transistors. Index Terms – SEPIC, PFC, electrodeless lamp, ressonant filter,",
"title": ""
},
{
"docid": "68cb8836a07846d19118d21383f6361a",
"text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.",
"title": ""
},
{
"docid": "0cd9577750b6195c584e55aac28cc2ba",
"text": "The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, incentives are becoming as important to dependability as technical design. The new field provides valuable insights not just into ‘security’ topics such as privacy, bugs, spam, and phishing, but into more general areas such as system dependability (the design of peer-to-peer systems and the optimal balance of effort by programmers and testers), and policy (particularly digital rights management). This research program has been starting to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and sociology. Most recently it has started to interact with psychology, both through the psychology-and-economics tradition and in response to phishing. The promise of this research program is a novel framework for analyzing information security problems – one that is both principled and effective.",
"title": ""
},
{
"docid": "489c19077caa00680764b1c352e2146b",
"text": "In this paper, we describe a system that reacts to both possible system breakdowns and low user engagement with a set of conversational strategies. These general strategies reduce the number of inappropriate responses and produce better user engagement. We also found that a system that reacts to both possible system breakdowns and low user engagement is rated by both experts and non-experts as having better overall user engagement compared to a system that only reacts to possible system breakdowns. We argue that for non-task-oriented systems we should optimize on both system response appropriateness and user engagement. We also found that apart from making the system response appropriate, funny and provocative responses can also lead to better user engagement. On the other hand, short appropriate responses, such as “Yes” or “No” can lead to decreased user engagement. We will use these findings to further improve our system.",
"title": ""
}
] |
scidocsrr
|
ab2b31c30a8bd070e7dbc69956473943
|
Forming impressions of people versus inanimate objects: Social-cognitive processing in the medial prefrontal cortex
|
[
{
"docid": "f6ba57b277beb545ad9b396404cd56b9",
"text": "The orbitofrontal cortex contains the secondary taste cortex, in which the reward value of taste is represented. It also contains the secondary and tertiary olfactory cortical areas, in which information about the identity and also about the reward value of odours is represented. The orbitofrontal cortex also receives information about the sight of objects from the temporal lobe cortical visual areas, and neurons in it learn and reverse the visual stimulus to which they respond when the association of the visual stimulus with a primary reinforcing stimulus (such as taste) is reversed. This is an example of stimulus-reinforcement association learning, and is a type of stimulus-stimulus association learning. More generally, the stimulus might be a visual or olfactory stimulus, and the primary (unlearned) positive or negative reinforcer a taste or touch. A somatosensory input is revealed by neurons that respond to the texture of food in the mouth, including a population that responds to the mouth feel of fat. In complementary neuroimaging studies in humans, it is being found that areas of the orbitofrontal cortex are activated by pleasant touch, by painful touch, by taste, by smell, and by more abstract reinforcers such as winning or losing money. Damage to the orbitofrontal cortex can impair the learning and reversal of stimulus-reinforcement associations, and thus the correction of behavioural responses when there are no longer appropriate because previous reinforcement contingencies change. The information which reaches the orbitofrontal cortex for these functions includes information about faces, and damage to the orbitofrontal cortex can impair face (and voice) expression identification. This evidence thus shows that the orbitofrontal cortex is involved in decoding and representing some primary reinforcers such as taste and touch; in learning and reversing associations of visual and other stimuli to these primary reinforcers; and in controlling and correcting reward-related and punishment-related behavior, and thus in emotion. The approach described here is aimed at providing a fundamental understanding of how the orbitofrontal cortex actually functions, and thus in how it is involved in motivational behavior such as feeding and drinking, in emotional behavior, and in social behavior.",
"title": ""
},
{
"docid": "4b284736c51435f9ab6f52f174dc7def",
"text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.",
"title": ""
}
] |
[
{
"docid": "601ab07a9169073032e713b0f5251c1b",
"text": "We discuss fast exponential time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.",
"title": ""
},
{
"docid": "f00b9a311fb8b14100465c187c9e4659",
"text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.",
"title": ""
},
{
"docid": "aeba4012971d339a9a953a7b86f57eb8",
"text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"title": ""
},
{
"docid": "9cddaea30d7dda82537c273e97bff008",
"text": "A low-offset latched comparator using new dynamic offset cancellation technique is proposed. The new technique achieves low offset voltage without pre-amplifier and quiescent current. Furthermore the overdrive voltage of the input transistor can be optimized to reduce the offset voltage of the comparator independent of the input common mode voltage. A prototype comparator has been fabricated in 90 nm 9M1P CMOS technology with 152 µm2. Experimental results show that the comparator achieves 3.8 mV offset at 1 sigma at 500 MHz operating, while dissipating 39 μW from a 1.2 V supply.",
"title": ""
},
{
"docid": "338dc5d14a5c00a110823dd3ce7c2867",
"text": "Le diagnostic de l'hallux valgus est clinique. Le bilan radiographique n'intervient qu'en seconde intention pour préciser les vices architecturaux primaires ou secondaires responsables des désaxations ostéo-musculotendineuses. Ce bilan sera toujours réalisé dans des conditions physiologiques, c'est-à-dire le pied en charge. La radiographie de face en charge apprécie la formule du pied (égyptien, grec, carré), le degré de luxation des sésamoïdes (stades 1, 2 ou 3), les valeurs angulaires (ouverture du pied, varus intermétatarsien, valgus interphalangien) et linéaires, tel l'étalement de l'avant-pied. La radiographie de profil en charge évalue la formule d'un pied creux, plat ou normo axé. L'incidence de Guntz Walter reflétant l'appui métatarsien décèle les zones d'hyperappui pathologique. En post-opératoire, ce même bilan permettra d'évaluer le geste chirurgical et de reconnaître une éventuelle hyper ou hypocorrection. The diagnosis of hallux valgus is a clinical one. Radiographic examination is involved only secondarily, to define the primary or secondary structural defects responsible for bony and musculotendinous malalignement. This examination should always be made under physiologic conditions, i.e., with the foot taking weight. The frontal radiograph in weight-bearing assesses the category of the foot (Egyptian, Greek, square), the degree of luxation of the sesamoids (stages 1, 2 or 3), the angular values (opening of the foot, intermetatarsal varus, interphalangeal valgus) and the linear values such as the spreading of the forefoot. The lateral radiograph in weight-bearing categorises the foot as cavus, flat or normally oriented. The Guntz Walter view indicates the thrust on the metatarsals and reveals zones of abnormal excessive thrust. Postoperatively, the same examination makes it possible to assess the outcome of the surgical procedure and to detect any over- or under-correction.",
"title": ""
},
{
"docid": "b845aaa999c1ed9d99cb9e75dff11429",
"text": "We present a new space-efficient approach, (SparseDTW ), to compute the Dynamic Time Warping (DTW ) distance between two time series that always yields the optimal result. This is in contrast to other known approaches which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series. The more the similarity between the time series the less space required to compute the DTW between them. To the best of our knowledge, all other techniques to speedup DTW, impose apriori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches.",
"title": ""
},
{
"docid": "b318cfcbe82314cc7fa898f0816dbab8",
"text": "Flow experience is often considered as an important standard of ideal user experience (UX). Till now, flow is mainly measured via self-report questionnaires, which cannot evaluate flow immediately and objectively. In this paper, we constructed a physiological evaluation model to evaluate flow in virtual reality (VR) game. The evaluation model consists of five first-level indicators and their respective second-level indicators. Then, we conducted an empirical experiment to test the effectiveness of partial indicators to predict flow experience. Most results supported the model and revealed that heart rate, interbeat interval, heart rate variability (HRV), low-frequency HRV (LF-HRV), high-frequency HRV (HF-HRV), and respiratory rate are all effective indicators in predicting flow experience. Further research should be conducted to improve the evaluation model and conclude practical implications in UX and VR game design.",
"title": ""
},
{
"docid": "d710b31d51cd7c737505de9bbe2a31ad",
"text": "GAIL is a recent successful imitation learning architecture that exploits the adversarial training procedure introduced in GANs. Albeit successful at generating behaviours similar to those demonstrated to the agent, GAIL suffers from a high sample complexity in the number of interactions it has to carry out in the environment in order to achieve satisfactory performance. We dramatically shrink the amount of interactions with the environment necessary to learn well-behaved imitation policies, by up to several orders of magnitude. Our framework, operating in the model-free regime, exhibits a significant increase in sample-efficiency over previous methods by simultaneously a) learning a self-tuned adversarially-trained surrogate reward and b) leveraging an off-policy actor-critic architecture. We show that our approach is simple to implement and that the learned agents remain remarkably stable, as shown in our experiments that span a variety of continuous control tasks. Video visualisation available at: https://streamable.com/42l01",
"title": ""
},
{
"docid": "7e264804d56cab24454c59fe73b51884",
"text": "General Douglas MacArthur remarked that \"old soldiers never die; they just fade away.\" For decades, researchers have concluded that visual working memories, like old soldiers, fade away gradually, becoming progressively less precise as they are retained for longer periods of time. However, these conclusions were based on threshold-estimation procedures in which the complete termination of a memory could artifactually produce the appearance of lower precision. Here, we use a recall-based visual working memory paradigm that provides separate measures of the probability that a memory is available and the precision of the memory when it is available. Using this paradigm, we demonstrate that visual working memory representations may be retained for several seconds with little or no loss of precision, but that they may terminate suddenly and completely during this period.",
"title": ""
},
{
"docid": "ea05a43abee762d4b484b5027e02a03a",
"text": "One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.",
"title": ""
},
{
"docid": "b721cdddce57146f540fe12d957f47cc",
"text": "The effects of social influence and homophily suggest that both network structure and node attribute information should inform the tasks of link prediction and node attribute inference. Recently, Yin et al. [28, 29] proposed Social-Attribute Network (SAN), an attribute-augmented social network, to integrate network structure and node attributes to perform both link prediction and attribute inference. They focused on generalizing the random walk with restart algorithm to the SAN framework and showed improved performance. In this paper, we extend the SAN framework with several leading supervised and unsupervised link prediction algorithms and demonstrate performance improvement for each algorithm on both link prediction and attribute inference. Moreover, we make the novel observation that attribute inference can help inform link prediction, i.e., link prediction accuracy is further improved by first inferring missing attributes. We comprehensively evaluate these algorithms and compare them with other existing algorithms using a novel, largescale Google+ dataset, which we make publicly available.",
"title": ""
},
{
"docid": "1350f4e274947881f4562ab6596da6fd",
"text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.",
"title": ""
},
{
"docid": "a247fd8f9f619cada34bc9150b4881a9",
"text": "Within cognitive science and psychology, there has been a good deal of interest recently in the topic of creativity. In this book, Scott Turner of the University of California, Los Angeles, presents a theory of creativity applied to generating small stories. Turner can be thought of as a member of the third generation of the Schank family: first, of course, there was grandfather Roger Schank, who in the 1970s with Robert Abelson at Yale, embarked on the research project of understanding narrative text using the ideas of goals, plans, and scripts. The attempt was to propose computational models that would accomplish aspects of narrative understanding. The second generation was a talented group of people, including Wendy Lehnert and Robert Wilensky, who did their Ph.D.s at Yale on story understanding. Turner is a member of a third generation, advised by Michael Dyer who also obtained his Ph.D. at Yale and then moved to an academic position at UCLA. Dyer had turned his attention to story generation as well as story understanding. In the classic mode, Turner took on rather too much for his Ph.D. He wrote a large artificial intelligence program, 17,000 lines of Lisp code, that produces a reasonable output that could, with the suspension of a certain amount of disbelief, pass for the production of a human author. He calls his program \"Minstrel.\" It generates stories of half a page or so about knights and ladies at the court of King Arthur. The program was the core of his Ph.D. thesis, and this is the book of the program. Here is a sample from one of Minstrel's stories (p. 72):",
"title": ""
},
{
"docid": "4a8b622eef99f13b8c4f023824688153",
"text": "Internet memes are increasingly used to sway and manipulate public opinion. This prompts the need to study their propagation, evolution, and influence across the Web. In this paper, we detect and measure the propagation of memes across multiple Web communities, using a processing pipeline based on perceptual hashing and clustering techniques, and a dataset of 160M images from 2.6B posts gathered from Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, over the course of 13 months. We group the images posted on fringe Web communities (/pol/, Gab, and The_Donald subreddit) into clusters, annotate them using meme metadata obtained from Know Your Meme, and also map images from mainstream communities (Twitter and Reddit) to the clusters.\n Our analysis provides an assessment of the popularity and diversity of memes in the context of each community, showing, e.g., that racist memes are extremely common in fringe Web communities. We also find a substantial number of politics-related memes on both mainstream and fringe Web communities, supporting media reports that memes might be used to enhance or harm politicians. Finally, we use Hawkes processes to model the interplay between Web communities and quantify their reciprocal influence, finding that /pol/ substantially influences the meme ecosystem with the number of memes it produces, while The_Donald has a higher success rate in pushing them to other communities.",
"title": ""
},
{
"docid": "b4533cd83713a94f00239857c0ff29a5",
"text": "Nowadays, IT community is experiencing great shift in computing and information storage infrastructures by using powerful, flexible and reliable alternative of cloud computing. The power of cloud computing may also be realized for mankind if some dedicated disaster management clouds will be developed at various countries cooperating each other on some common standards. The experimentation and deployment of cloud computing by governments of various countries for mankind may be the justified use of IT at social level. It is possible to realize a real-time disaster management cloud where applications in cloud will respond within a specified time frame. If a Real-Time Cloud (RTC) is available then for intelligent machines like robots the complex processing may be done on RTC via request and response model. The complex processing is more desirable as level of intelligence increases in robots towards humans even more. Therefore, it may be possible to manage disaster sites more efficiently with more intelligent cloud robots without great lose of human lives waiting for various assistance at disaster site. Real-time garbage collector, real-time specification for Java, multicore CPU architecture with network-on-chip, parallel algorithms, distributed algorithms, high performance database systems, high performance web servers and gigabit networking can be used to develop real-time applications in cloud.",
"title": ""
},
{
"docid": "fcfc16b94f06bf6120431a348e97b9ac",
"text": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.",
"title": ""
},
{
"docid": "a0f46c67118b2efec2bce2ecd96d11d6",
"text": "This paper describes the implementation of a service to identify and geo-locate real world events that may be present as social activity signals in two different social networks. Specifically, we focus on content shared by users on Twitter and Instagram in order to design a system capable of fusing data across multiple networks. Past work has demonstrated that it is indeed possible to detect physical events using various social network platforms. However, many of these signals need corroboration in order to handle events that lack proper support within a single network. We leverage this insight to design an unsupervised approach that can correlate event signals across multiple social networks. Our algorithm can detect events and identify the location of the event occurrence. We evaluate our algorithm using both simulations and real world datasets collected using Twitter and Instagram. The results indicate that our algorithm significantly improves false positive elimination and attains high precision compared to baseline methods on real world datasets.",
"title": ""
},
{
"docid": "97b7d2dbb06b083b807a4f153e8a711e",
"text": "Hyperspectral imaging sensors are becoming increasingly popular in robotics applications such as agriculture and mining, and allow per-pixel thematic classification of materials in a scene based on their unique spectral signatures. Recently, convolutional neural networks have shown remarkable performance for classification tasks, but require substantial amounts of labelled training data. This data must sufficiently cover the variability expected to be encountered in the environment. For hyperspectral data, one of the main variations encountered outdoors is due to incident illumination, which can change in spectral shape and intensity depending on the scene geometry. For example, regions occluded from the sun have a lower intensity and their incident irradiance skewed towards shorter wavelengths. In this work, a data augmentation strategy based on relighting is used during training of a hyperspectral convolutional neural network. It allows training to occur in the outdoor environment given only a small labelled region, which does not need to sufficiently represent the geometric variability of the entire scene. This is important for applications where obtaining large amounts of training data is labourious, hazardous or difficult, such as labelling pixels within shadows. Radiometric normalisation approaches for preprocessing the hyperspectral data are analysed and it is shown that methods based on the raw pixel data are sufficient to be used as input for the classifier. This removes the need for external hardware such as calibration boards, which can restrict the application of hyperspectral sensors in robotics applications. Experiments to evaluate the classification system are carried out on two datasets captured from a field-based platform.",
"title": ""
},
{
"docid": "2f7b1f2422526d99e75dce7d38665774",
"text": "Conventional Open Information Extraction (Open IE) systems are usually built on hand-crafted patterns from other NLP tools such as syntactic parsing, yet they face problems of error propagation. In this paper, we propose a neural Open IE approach with an encoder-decoder framework. Distinct from existing methods, the neural Open IE approach learns highly confident arguments and relation tuples bootstrapped from a state-of-the-art Open IE system. An empirical study on a large benchmark dataset shows that the neural Open IE system significantly outperforms several baselines, while maintaining comparable computational efficiency.",
"title": ""
},
{
"docid": "2710d644a45697cdd3abd1286218d060",
"text": "Significant ongoing debate exists amongst stakeholders as to the best front-of-pack labelling approach and emerging evidence suggests that the plethora of schemes may cause confusion for the consumer. To gain a better understanding of the relevant psychological phenomena and consumer perspectives surrounding FoP labelling schemes and their optimal development a Multiple Sort Procedure study involving free sorting of a range of nutritional labels presented on cards was performed in four countries (n=60). The underlying structure of the qualitative data generated was explored using Multiple Scalogram Analysis. Elicitation of categorisations from consumers has the potential to provide a very important perspective in this arena and results demonstrated that the amount of information contained within a nutrition label has high salience for consumers, as does the health utility of the label although a dichotomy exists in the affective evaluation of the labels containing varying degrees of information aggregation. Classification of exiting front-of-pack labelling systems on a proposed dimension of 'directiveness' leads to a better understanding of why some schemes may be more effective than others in particular situations or for particular consumers. Based on this research an enhanced hypothetical front-of-pack labelling scheme which combines both directive and non-directive elements is proposed.",
"title": ""
}
] |
scidocsrr
|
633cc934f0a34a19103267f96c4b36bc
|
A world's first product of three-dimensional vertical NAND Flash memory and beyond
|
[
{
"docid": "eb2a89c9308283f871df3d52d1bdc340",
"text": "Vertical NAND flash memory cell array by TCAT (terabit cell array transistor) technology is proposed. Damascened metal gate SONOS type cell in the vertical NAND flash string is realized by a unique dasiagate replacementpsila process. Also, conventional bulk erase operation of the cell is successfully demonstrated. All advantages of TCAT flash is achieved without any sacrifice of bit cost scalability.",
"title": ""
}
] |
[
{
"docid": "094efdf15bd0c6d58a31fba700d922ee",
"text": "The main objectives of any good staging system - essential to an evidence-based approach to cancer - are: to aid the clinician in planning treatment; to provide indication of prognosis; to assist the physician in evaluating the results of treatment; to facilitate the exchange of information between treatment centers, thus disseminating knowledge; and to contribute to continuing investigations into human malignancies. A good staging system must have 3 basic characteristics: it must be valid, reliable, and practical. The first staging system for gynecological cancers appeared around the turn of the 20th century and applied to the carcinoma of the cervix uteri-the most common cancer affecting women in high income countries at that time. The classification and staging of the other gynecological malignancies was not put forward until the 1950s. Over the years, these staging classifications - with the exception of cervical cancer and gestational trophoblastic neoplasia - have shifted from a clinical to a surgical-pathological basis. This paper reviews the history of the International Federation of Gynecology and Obstetrics (FIGO) cancer staging system, how it was developed, and why.",
"title": ""
},
{
"docid": "bf29ab51f0f2bba9b96e8afb963635e7",
"text": "ÐThis paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is to say, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions. Commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm. Our second contribution is to cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows us to efficiently recover correspondence matches using singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods. Index TermsÐInexact graph matching, EM algorithm, matrix factorization, mixture models, Delaunay triangulations.",
"title": ""
},
{
"docid": "d0372369256f0661eadddfcc27c992d6",
"text": "Massive Open Online Courses (MOOCs) are a disruptive trend in education. Several initiatives have emerged during the last months to give support to MOOCs, and many educators have started offering courses as MOOCs in different areas and disciplines. However, designing a MOOC is not an easy task. Educators need to face not only pedagogical issues, but also other issues of logistical, technological and financial nature, as well as how these issues relate and constrain each other. Currently, little guidance is available for educators to address the design of MOOCs from scratch keeping a balance between all these issues. This paper proposes a conceptual framework for supporting educators in the description and design of MOOCs called the MOOC Canvas. The MOOC Canvas defines eleven interrelated issues that are addressed through a set of questions, offering a visual and understandable guidance for educators during the MOOC design process. As a practical usage example, this paper shows how the MOOC Canvas captures the description and design of a real 9-week MOOC. An analysis of the different elements of the course shed some light on the usage of the MOOC Canvas as a mechanism to address the description and design of MOOCs.",
"title": ""
},
{
"docid": "9e4da48d0fa4c7ff9566f30b73da3dc3",
"text": "Yang Song; Robert van Boeschoten University of Amsterdam Plantage Muidergracht 12, 1018 TV Amsterdam, the Netherlands y.song@uva.nl; r.m.van.boeschoten@hva.nl Abstract: Crowdfunding has been used as one of the effective ways for entrepreneurs to raise funding especially in creative industries. Individuals as well as organizations are paying more attentions to the emergence of new crowdfunding platforms. In the Netherlands, the government is also trying to help artists access financial resources through crowdfunding platforms. This research aims at discovering the success factors for crowdfunding projects from both founders’ and funders’ perspective. We designed our own website for founders and funders to observe crowdfunding behaviors. We linked our self-designed website to Google analytics in order to collect our data. Our research will contribute to crowdfunding success factors and provide practical recommendations for practitioners and researchers.",
"title": ""
},
{
"docid": "8b8be2f7a34f14c24443599cb570343f",
"text": "We present Audiopad, an interface for musical performance that aims to combine the modularity of knob based controllers with the expressive character of multidimensional tracking interfaces. The performer's manipulations of physical pucks on a tabletop control a real-time synthesis process. The pucks are embedded with LC tags that the system tracks in two dimensions with a series of specially shaped antennae. The system projects graphical information on and around the pucks to give the performer sophisticated control over the synthesis process. INTRODUCTION The late nineties saw the emergence of a new musical performance paradigm. Sitting behind the glowing LCDs on their laptops, electronic musicians could play their music in front of audiences without bringing a truckload of synthesizers and patch cables. However, the transition to laptop based performance created a rift between the performer and the audience as there was almost no stage presence for an onlooker to latch on to. Furthermore, the performers lost much of the real-time expressive power of traditional analog instruments. Their on-the-fly arrangements relied on inputs from their laptop keyboards and therefore lacked nuance, finesse, and improvisational capabilities.",
"title": ""
},
{
"docid": "72a6a7fe366def9f97ece6d1ddc46a2e",
"text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.",
"title": ""
},
{
"docid": "ebd65c03599cc514e560f378f676cc01",
"text": "The purpose of this paper is to examine an integrated model of TAM and D&M to explore the effects of quality features, perceived ease of use, perceived usefulness on users’ intentions and satisfaction, alongside the mediating effect of usability towards use of e-learning in Iran. Based on the e-learning user data collected through a survey, structural equations modeling (SEM) and path analysis were employed to test the research model. The results revealed that ‘‘intention’’ and ‘‘user satisfaction’’ both had positive effects on actual use of e-learning. ‘‘System quality’’ and ‘‘information quality’’ were found to be the primary factors driving users’ intentions and satisfaction towards use of e-learning. At last, ‘‘perceived usefulness’’ mediated the relationship between ease of use and users’ intentions. The sample consisted of e-learning users of four public universities in Iran. Past studies have seldom examined an integrated model in the context of e-learning in developing countries. Moreover, this paper tries to provide a literature review of recent published studies in the field of e-learning. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2c166ea3eb548135f44cc6afead34d61",
"text": "Yelp has been one of the most popular sites for users to rate and review local businesses. Businesses organize their own listings while users rate the business from 1− 5 stars and write text reviews. Users can also vote on other helpful or funny reviews written by other users. Using this enormous amount of data that Yelp has collected over the years, it would be meaningful if we could learn to predict ratings based on review‘s text alone, because free-text reviews are difficult for computer systems to understand, analyze and aggregate [1]. The idea can be extended to many other applications where assessment has traditionally been in the format of text and assigning a quick numerical rating is difficult. Examples include predicting movie or book ratings based on news articles or blogs [2], assigning ratings to YouTube videos based on viewers‘comments, and even more general sentiment analysis, sometimes also referred to as opinion mining.",
"title": ""
},
{
"docid": "3cabea669b02ca2653b880c0e0603005",
"text": "A simple method is presented to remedy the hysteresis problem associated with the gate dielectric of poly(4-vinyl phenol) (PVPh), which is widely used for organic transistors. The method involves simple blanket illumination of deep ultraviolet (UV) on the PVPh layer at room temperature. The exposure results in the photochemical transformation of hydroxyl groups in PVPh via the UV/ozone effect. This reduction in the concentration of hydroxyl groups enables one to effectively control the hysteresis problem even when the layer is exposed to moisture. The contrast created in the concentration of hydroxyl groups between the exposed and unexposed parts of PVPh also allows simultaneous patterning of the dielectric layer.",
"title": ""
},
{
"docid": "f5886c4e73fed097e44d6a0e052b143f",
"text": "A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "da9b9a32db674e5f6366f6b9e2c4ee10",
"text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.",
"title": ""
},
{
"docid": "bf687d16bd11b4bae52c3ba96016ae93",
"text": "Neural attention has become central to many state-of-the-art models in natural language processing and related domains. Attention networks are an easy-to-train and effective method for softly simulating alignment; however, the approach does not marginalize over latent alignments in a probabilistic sense. This property makes it difficult to compare attention to other alignment approaches, to compose it with probabilistic models, and to perform posterior inference conditioned on observed data. A related latent approach, hard attention, fixes these issues, but is generally harder to train and less accurate. This work considers variational attention networks, alternatives to soft and hard attention for learning latent variable alignment models, with tighter approximation bounds based on amortized variational inference. We further propose methods for reducing the variance of gradients to make these approaches computationally feasible. Experiments show that for machine translation and visual question answering, inefficient exact latent variable models outperform standard neural attention, but these gains go away when using hard attention based training. On the other hand, variational attention retains most of the performance gain but with training speed comparable to neural attention.",
"title": ""
},
{
"docid": "892f23e1d796bfa45c9b6e0ea1af796e",
"text": "Many markets are converging, as communications and logistic networks become more integrated and firms from all parts of the world are expanding operations on a global scale. At the same time, other markets are becoming more diverse, and marketers are increasingly encountering economic and cultural heterogeneity. The authors examine the implications of these trends and the extent to which they necessitate rethinking and refocusing global marketing strategy. First, they examine the perspective of global marketing strategy as an evolutionary process. Next, they divide markets into five major spheres, examining the economic and cultural diversity of markets in each. Next, they discuss the resultant need to develop and implement different strategies for markets in each of these spheres. Some conclusions are drawn relating to the difficulties involved in developing a coherent strategy in international markets. The authors advocate developing a semiglobal marketing strategy, which involves following different directions in different parts of the world, resulting in greater autonomy at the local level.",
"title": ""
},
{
"docid": "c115af8ea687edb9769e7cef48a938ac",
"text": "High resolution imaging radars have come a long way since the early 90's, starting with an FAA Synthetic Vision System program at 35/94 GHz. These systems were heavy and bulky, carried a price tag of about $500K, and were only suitable for larger aircrafts at very small quantity production. Size, weight, and power constraints make 94 GHz still a preferred choice for many situational awareness applications ranging from landing in poor visibility due to fog or brown-out, to cable warning & obstacle avoidance and sense and avoid for unmanned aerial systems. Using COTS components and highly integrated MMIC modules, a complete radar breadboard has been demonstrated in 9 months in one line replacement unit with a total weight of 20 lbs. The new generation of this 94 GHz FMCW imaging sensor will be on the order of 15 lbs or less including the entire radar signal processor. The size and weight achievements of this sensor open up the potential market for rotorcrafts and general aviation.",
"title": ""
},
{
"docid": "5e7e74966751bba22ca66b02c4c91642",
"text": "To deal with the defects of BP neural networks used in balance control of inverted pendulum, such as longer train time and converging in partial minimum, this article reaLizes the control of double inverted pendulum with improved BP algorithm of artificial neural networks(ANN), builds up a training model of test simulation and the BP network is 6-10-1 structure. Tansig function is used in hidden layer and PureLin function is used in output layer, LM is used in training algorithm. The training data is acquried by three-loop PID algorithm. The model is learned and trained with Matlab calculating software, and the simuLink simulation experiment results prove that improved BP algorithm for inverted pendulum control has higher precision, better astringency and lower calculation. This algorithm has wide appLication on nonLinear control and robust control field in particular.",
"title": ""
},
{
"docid": "54c6e02234ce1c0f188dcd0d5ee4f04c",
"text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.",
"title": ""
},
{
"docid": "38aa324964214620c55eb4edfecf1bd2",
"text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.",
"title": ""
},
{
"docid": "76e5081e71b4eba69e440174161c77c9",
"text": "This paper outlines the main assumptions of relevance theory (Sperber & Wilson 1985, 1995, 1998, 2002, Wilson & Sperber 2002), an inferential approach to pragmatics. Relevance theory is based on a definition of relevance and two principles of relevance: a Cognitive Principle (that human cognition is geared to the maximisation of relevance), and a Communicative Principle (that utterances create expectations of optimal relevance). We explain the motivation for these principles and illustrate their application to a variety of pragmatic problems. We end by considering the implications of this relevance-theoretic approach for the architecture of the mind.",
"title": ""
},
{
"docid": "ad2546a681a3b6bcef689f0bb71636b5",
"text": "Data and computation integrity and security are major concerns for users of cloud computing facilities. Many production-level clouds optimistically assume that all cloud nodes are equally trustworthy when dispatching jobs; jobs are dispatched based on node load, not reputation. This increases their vulnerability to attack, since compromising even one node suffices to corrupt the integrity of many distributed computations. This paper presents and evaluates Hatman: the first full-scale, data-centric, reputation-based trust management system for Hadoop clouds. Hatman dynamically assesses node integrity by comparing job replica outputs for consistency. This yields agreement feedback for a trust manager based on EigenTrust. Low overhead and high scalability is achieved by formulating both consistency-checking and trust management as secure cloud computations; thus, the cloud's distributed computing power is leveraged to strengthen its security. Experiments demonstrate that with feedback from only 100 jobs, Hatman attains over 90% accuracy when 25% of the Hadoop cloud is malicious.",
"title": ""
},
{
"docid": "c8fe38c7c0e9ca6c7098ec1571a25d73",
"text": "Middleboxes play a major role in contemporary networks, as forwarding packets is often not enough to meet operator demands, and other functionalities (such as security, QoS/QoE provisioning, and load balancing) are required. Traffic is usually routed through a sequence of such middleboxes, which either reside across the network or in a single, consolidated location. Although middleboxes provide a vast range of different capabilities, there are components that are shared among many of them.\n A task common to almost all middleboxes that deal with L7 protocols is Deep Packet Inspection (DPI). Today, traffic is inspected from scratch by all the middleboxes on its route. In this paper, we propose to treat DPI as a service to the middleboxes, implying that traffic should be scanned only once, but against the data of all middleboxes that use the service. The DPI service then passes the scan results to the appropriate middleboxes. Having DPI as a service has significant advantages in performance, scalability, robustness, and as a catalyst for innovation in the middlebox domain. Moreover, technologies and solutions for current Software Defined Networks (SDN) (e.g., SIMPLE [41]) make it feasible to implement such a service and route traffic to and from its instances.",
"title": ""
}
] |
scidocsrr
|
10e85f141507f1e5a014f90fe03d4d41
|
Control of single-phase cascaded H-bridge multilevel inverter with modified MPPT for grid-connected photovoltaic systems
|
[
{
"docid": "d06e4f97786f8ecf9694ed270a36c24a",
"text": "In this paper, an improved maximum power point (MPP) tracking (MPPT) with better performance based on voltage-oriented control (VOC) is proposed to solve a fast-changing irradiation problem. In VOC, a cascaded control structure with an outer dc link voltage control loop and an inner current control loop is used. The currents are controlled in a synchronous orthogonal d,q frame using a decoupled feedback control. The reference current of proportional-integral (PI) d-axis controller is extracted from the dc-side voltage regulator by applying the energy-balancing control. Furthermore, in order to achieve a unity power factor, the q-axis reference is set to zero. The MPPT controller is applied to the reference of the outer loop control dc voltage photovoltaic (PV). Without PV array power measurement, the proposed MPPT identifies the correct direction of the MPP by processing the d-axis current reflecting the power grid side and the signal error of the PI outer loop designed to only represent the change in power due to the changing atmospheric conditions. The robust tracking capability under rapidly increasing and decreasing irradiance is verified experimentally with a PV array emulator. Simulations and experimental results demonstrate that the proposed method provides effective, fast, and perfect tracking.",
"title": ""
},
{
"docid": "4bd123c2c44e703133e9a6093170db39",
"text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. Simulation and experimental results are presented for different operating conditions.",
"title": ""
}
] |
[
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "ad526a01f76956af87be7287c5cdb964",
"text": "Model-based reinforcement learning is a powerful paradigm for learning tasks in robotics. However, in-depth exploration is usually required and the actions have to be known in advance. Thus, we propose a novel algorithm that integrates the option of requesting teacher demonstrations to learn new domains with fewer action executions and no previous knowledge. Demonstrations allow new actions to be learned and they greatly reduce the amount of exploration required, but they are only requested when they are expected to yield a significant improvement because the teacher’s time is considered to be more valuable than the robot’s time. Moreover, selecting the appropriate action to demonstrate is not an easy task, and thus some guidance is provided to the teacher. The rule-based model is analyzed to determine the parts of the state that may be incomplete, and to provide the teacher with a set of possible problems for which a demonstration is needed. Rule analysis is also used to find better alternative models and to complete subgoals before requesting help, thereby minimizing the number of requested demonstrations. These improvements were demonstrated in a set of experiments, which included domains from the international planning competition and a robotic task. Adding teacher demonstrations and rule analysis reduced the amount of exploration required by up to 60% in some domains, and improved the success ratio by 35% in other domains.",
"title": ""
},
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "b5f9d2f5c401be98b5e9546c0abaef22",
"text": "This paper describes a new approach for training generative adversarial networks (GAN) to understand the detailed 3D shape of objects. While GANs have been used in this domain previously, they are notoriously hard to train, especially for the complex joint data distribution over 3D objects of many categories and orientations. Our method extends previous work by employing the Wasserstein distance normalized with gradient penalization as a training objective. This enables improved generation from the joint object shape distribution. Our system can also reconstruct 3D shape from 2D images and perform shape completion from occluded 2.5D range scans. We achieve notable quantitative improvements in comparison to existing baselines.",
"title": ""
},
{
"docid": "53edb03722153d091fb2e78c811d4aa5",
"text": "One of the main reasons for failure in Software Process Improvement (SPI) initiatives is the lack of motivation of the professionals involved. Therefore, motivation should be encouraged throughout the software process. Gamification allows us to define mechanisms that motivate people to develop specific tasks. A gamification framework was adapted to the particularities of an organization and software professionals to encourage motivation. Thus, it permitted to facilitate the adoption of SPI improvements and a higher success rate. The objective of this research was to validate the framework presented and increase the actual implementation of gamification in organizations. To achieve this goal, a qualitative research methodology was employed through interviews that involved a total of 29 experts in gamification and SPI. The results of this study confirm the validity of the framework presented, its relevance in the field of SPI and its alignment with the standard practices of gamification implementation within organizations.",
"title": ""
},
{
"docid": "874cecfb3f21f4c145fda262e1eee369",
"text": "For many languages that use non-Roman based indigenous scripts (e.g., Arabic, Greek and Indic languages) one can often find a large amount of user generated transliterated content on the Web in the Roman script. Such content creates a monolingual or multi-lingual space with more than one script which we refer to as the Mixed-Script space. IR in the mixed-script space is challenging because queries written in either the native or the Roman script need to be matched to the documents written in both the scripts. Moreover, transliterated content features extensive spelling variations. In this paper, we formally introduce the concept of Mixed-Script IR, and through analysis of the query logs of Bing search engine, estimate the prevalence and thereby establish the importance of this problem. We also give a principled solution to handle the mixed-script term matching and spelling variation where the terms across the scripts are modelled jointly in a deep-learning architecture and can be compared in a low-dimensional abstract space. We present an extensive empirical analysis of the proposed method along with the evaluation results in an ad-hoc retrieval setting of mixed-script IR where the proposed method achieves significantly better results (12% increase in MRR and 29% increase in MAP) compared to other state-of-the-art baselines.",
"title": ""
},
{
"docid": "09ad5f732edf32059f8faf0a10a209f9",
"text": "Article history: Received 19 August 2008 Received in revised form 22 May 2009 Accepted 5 August 2009",
"title": ""
},
{
"docid": "49fb2ea90926df68eb29c2d1ed643cd0",
"text": "The neural network model has been the fulcrum of the so-called AI revolution. Although very powerful for pattern-recognition tasks, however, the model has two main drawbacks: it tends to overfit when the training dataset is small, and it is unable to accurately capture category information when the class number is large. In this paper, we combine reinforcement learning, generative adversarial networks, and recurrent neural networks to build a new model, termed category sentence generative adversarial network (CS-GAN). Not only the proposed model is able to generate category sentences that enlarge the original dataset, but also it helps improve its generalization capability during supervised training. We evaluate the performance of CS-GAN for the task of sentiment analysis. Quantitative evaluation exhibits the accuracy improvement in polarity detection on a small dataset with high category information.",
"title": ""
},
{
"docid": "5fa019a88de4a1683ee63b2a25f8c285",
"text": "Metabolomics is increasingly being applied towards the identification of biomarkers for disease diagnosis, prognosis and risk prediction. Unfortunately among the many published metabolomic studies focusing on biomarker discovery, there is very little consistency and relatively little rigor in how researchers select, assess or report their candidate biomarkers. In particular, few studies report any measure of sensitivity, specificity, or provide receiver operator characteristic (ROC) curves with associated confidence intervals. Even fewer studies explicitly describe or release the biomarker model used to generate their ROC curves. This is surprising given that for biomarker studies in most other biomedical fields, ROC curve analysis is generally considered the standard method for performance assessment. Because the ultimate goal of biomarker discovery is the translation of those biomarkers to clinical practice, it is clear that the metabolomics community needs to start “speaking the same language” in terms of biomarker analysis and reporting-especially if it wants to see metabolite markers being routinely used in the clinic. In this tutorial, we will first introduce the concept of ROC curves and describe their use in single biomarker analysis for clinical chemistry. This includes the construction of ROC curves, understanding the meaning of area under ROC curves (AUC) and partial AUC, as well as the calculation of confidence intervals. The second part of the tutorial focuses on biomarker analyses within the context of metabolomics. This section describes different statistical and machine learning strategies that can be used to create multi-metabolite biomarker models and explains how these models can be assessed using ROC curves. In the third part of the tutorial we discuss common issues and potential pitfalls associated with different analysis methods and provide readers with a list of nine recommendations for biomarker analysis and reporting. To help readers test, visualize and explore the concepts presented in this tutorial, we also introduce a web-based tool called ROCCET (ROC Curve Explorer & Tester, http://www.roccet.ca ). ROCCET was originally developed as a teaching aid but it can also serve as a training and testing resource to assist metabolomics researchers build biomarker models and conduct a range of common ROC curve analyses for biomarker studies.",
"title": ""
},
{
"docid": "af45e4aa653af4e2f2ece29f965aaafc",
"text": "We use Reinforcement Learning (RL) to learn question-answering dialogue policies for a real-world application. We analyze a corpus of interactions of museum visitors with two virtual characters that serve as guides at the Museum of Science in Boston, in order to build a realistic model of user behavior when interacting with these characters. A simulated user is built based on this model and used for learning the dialogue policy of the virtual characters using RL. Our learned policy outperforms two baselines (including the original dialogue policy that was used for collecting the corpus) in a simulation setting.",
"title": ""
},
{
"docid": "175cc14a49e40fade4424c5cf62a6073",
"text": "A 2.5 GHz low phase noise oscillator is presented in this paper. The oscillator was design using a Solidly Mounted BAW Resonator (SMR BAW) as resonant element. The resonator exhibits a parallel resonance Q factor around 1300 at 2.54 GHz. The core oscillator was designed using STMicroelectronics 65 nm CMOS technology. It exhibits 632 mV output (zero-to-pic) with phase noise performance of -92 dBc/Hz, -109 dBc/Hz, and -130 dBc/Hz at 2 KHz, 10 KHz and 100 KHz respectively. It consumes 1 mA from a 1.2 V source.",
"title": ""
},
{
"docid": "ec36f7ad0a916ab4040b0fddbf7b1172",
"text": "To review the state of research on the association between sleep among school-aged children and academic outcomes, the authors reviewed published studies investigating sleep, school performance, and cognitive and achievement tests. Tables with brief descriptions of each study's research methods and outcomes are included. Research reveals a high prevalence among school-aged children of suboptimal amounts of sleep and poor sleep quality. Research demonstrates that suboptimal sleep affects how well students are able to learn and how it may adversely affect school performance. Recommendations for further research are discussed.",
"title": ""
},
{
"docid": "3473417f1701c82a4a06c00545437a3c",
"text": "The eXtensible Markup Language (XML) and related technologies offer promise for (among other things) applying data management technology to documents, and also for providing a neutral syntax for interoperability among disparate systems. But like many new technologies, it has raised unrealistic expectations. We give an overview of XML and related standards, and offer opinions to help separate vaporware (with a chance of solidifying) from hype. In some areas, XML technologies may offer revolutionary improvements, such as in processing databases' outputs and extending data management to semi-structured data. For some goals, either a new class of DBMSs is required, or new standards must be built. For such tasks, progress will occur, but may be measured in ordinary years rather than Web time. For hierarchical formatted messages that do not need maximum compression (e.g., many military messages), XML may have considerable benefit. For interoperability among enterprise systems, XML's impact may be moderate as an improved basis for software, but great in generating enthusiasm for standardizing concepts and schemas.",
"title": ""
},
{
"docid": "4041235ab6ad93290ed90cdf5e07d6e5",
"text": "This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.",
"title": ""
},
{
"docid": "355fca41993ea19b08d2a9fc19e25722",
"text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.",
"title": ""
},
{
"docid": "40773627971f35b0af1e5f8d325e8118",
"text": "This tutorial covers the Dirichlet distribution, Dirichlet process, Pólya urn (and the associated Chinese restaurant process), hierarchical Dirichlet Process, and the Indian buffet process. Apart from basic properties, we describe and contrast three methods of generating samples: stick-breaking, the Pólya urn, and drawing gamma random variables. For the Dirichlet process we first present an informal introduction, and then a rigorous description for those more comfortable with probability theory.",
"title": ""
},
{
"docid": "ef9235285ebbef109254bfb5968d2d6b",
"text": "This paper proposes Dyadic Memory Networks (DyMemNN), a novel extension of end-to-end memory networks (memNN) for aspect-based sentiment analysis (ABSA). Originally designed for question answering tasks, memNN operates via a memory selection operation in which relevant memory pieces are adaptively selected based on the input query. In the problem of ABSA, this is analogous to aspects and documents in which the relationship between each word in the document is compared with the aspect vector. In the standard memory networks, simple dot products or feed forward neural networks are used to model the relationship between aspect and words which lacks representation learning capability. As such, our dyadic memory networks ameliorates this weakness by enabling rich dyadic interactions between aspect and word embeddings by integrating either parameterized neural tensor compositions or holographic compositions into the memory selection operation. To this end, we propose two variations of our dyadic memory networks, namely the Tensor DyMemNN and Holo DyMemNN. Overall, our two models are end-to-end neural architectures that enable rich dyadic interaction between aspect and document which intuitively leads to better performance. Via extensive experiments, we show that our proposed models achieve the state-of-the-art performance and outperform many neural architectures across six benchmark datasets.",
"title": ""
},
{
"docid": "de1c8cdb894641c0cf887556969c9770",
"text": "We document considerable return comovement associated with accruals after controlling for other common factors. An accrual-based factor-mimicking portfolio has a Sharpe ratio of 0.15, higher than that of the market factor or the HML factor of Fama and French (1993). In time series regressions, a model that includes the Fama-French factors and the additional accrual factor captures the accrual anomaly in average returns. However, further time series and cross-sectional tests indicate that it is the accrual characteristic rather than the accrual factor loading that predicts returns. These findings favor a behavioral explanation for the accrual anomaly.",
"title": ""
},
{
"docid": "71bd071b09ba6323877f7e9a51145751",
"text": "We introduce multilingual image description, the task of generating descriptions of images given data in multiple languages. This can be viewed as visually-grounded machine translation, allowing the image to play a role in disambiguating language. We present models for this task that are inspired by neural models for image description and machine translation. Our multilingual image description models generate target-language sentences using features transferred from separate models: multimodal features from a monolingual source-language image description model and visual features from an object recognition model. In experiments on a dataset of images paired with English and German sentences, using BLEU and Meteor as a metric, our models substantially improve upon existing monolingual image description models.",
"title": ""
},
{
"docid": "258d98751d5b3cf4f33bf9473a678cf4",
"text": "A Blockchain is a public immutable distributed ledger and stores a various kinds of transactions. Because there is no central authority that regulates the system and users don’t trust eath other, a blockchain system needs an algorithm for users to reach consensus on block creation. In this report, we will explore 3 consensus algorithms: Proof-of-Work, Proof-ofStake and Proof-of-Activity.",
"title": ""
}
] |
scidocsrr
|
861c2e4d74910d7a818968ccff95b122
|
The State of Electronic Word-Of-Mouth Research: A Literature Analysis
|
[
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
},
{
"docid": "b445de6f864c345d90162cb8b2527240",
"text": "he growing popularity of online product review forums invites the development of models and metrics that allow firms to harness these new sources of information for decision support. Our work contributes in this direction by proposing a novel family of diffusion models that capture some of the unique aspects of the entertainment industry and testing their performance in the context of very early postrelease motion picture revenue forecasting. We show that the addition of online product review metrics to a benchmark model that includes prerelease marketing, theater availability and professional critic reviews substantially increases its forecasting accuracy; the forecasting accuracy of our best model outperforms that of several previously published models. In addition to its contributions in diffusion theory, our study reconciles some inconsistencies among previous studies with respect to what online review metrics are statistically significant in forecasting entertainment good sales. CHRYSANTHOS DELLAROCAS, XIAOQUAN (MICHAEL) ZHANG, AND NEVEEN F. AWAD",
"title": ""
}
] |
[
{
"docid": "c81ad743ab41e4601cc4f33631ee3f93",
"text": "We present a technique to enhance control-flow analysis of business process models. The technique considerably speeds up the analysis an d improves the diagnostic information that is given to the user to fix control-flow errors . The technique consists of two parts: Firstly, the process model is decomp osed into single-entry-single-exit (SESE) fragments, which are usually subs tantially smaller than the original process. This decomposition is done in linear time. S econdly, each fragment is analyzed in isolation using a fast heuristic that ca n an lyze many of the fragments occurring in practice. Any remaining fragme nts that are not covered by the heuristic can then be analyzed using any known c omplete analysis technique. We used our technique in a case study with more than 340 real business pr ocesses modeled with the IBM WebSphere Business Modeler. The results s uggest that control-flow analysis of many real process models is feasible withou t significant delay (less than a second). Therefore, control-flow analysis co uld be used frequently during editing time, which allows errors to be caught at earliest possible time.",
"title": ""
},
{
"docid": "25f39a66710db781f4354f0da5974d61",
"text": "With the rapid development of economy in China over the past decade, air pollution has become an increasingly serious problem in major cities and caused grave public health concerns in China. Recently, a number of studies have dealt with air quality and air pollution. Among them, some attempt to predict and monitor the air quality from different sources of information, ranging from deployed physical sensors to social media. These methods are either too expensive or unreliable, prompting us to search for a novel and effective way to sense the air quality. In this study, we propose to employ the state of the art in computer vision techniques to analyze photos that can be easily acquired from online social media. Next, we establish the correlation between the haze level computed directly from photos with the official PM 2.5 record of the taken city at the taken time. Our experiments based on both synthetic and real photos have shown the promise of this image-based approach to estimating and monitoring air pollution.",
"title": ""
},
{
"docid": "e651af2be422e13548af7d3152d27539",
"text": "A sample of 116 children (M=6 years 7 months) in Grade 1 was randomly assigned to experimental (n=60) and control (n=56) groups, with equal numbers of boys and girls in each group. The experimental group received a program aimed at improving representation and transformation of visuospatial information, whereas the control group received a substitute program. All children were administered mental rotation tests before and after an intervention program and a Global-Local Processing Strategies test before the intervention. The results revealed that initial gender differences in spatial ability disappeared following treatment in the experimental but not in the control group. Gender differences were moderated by strategies used to process visuospatial information. Intervention and processing strategies were essential in reducing gender differences in spatial abilities.",
"title": ""
},
{
"docid": "250b5717e5a8bd0677f9ab71123d6390",
"text": "With the advent of robot-assisted surgery, the role of data-driven approaches to integrate statistics and machine learning is growing rapidly with prominent interests in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments that are expensive to extract, lack efficiency, and require significant domain-specific knowledge. We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time series data of the motion kinematics to individual skill levels. We perform experiments on the public minimally invasive surgical robotic dataset, JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our proposed learning model achieved competitive accuracies of 92.5%, 95.4%, and 91.3%, in the standard training tasks: Suturing, Needle-passing, and Knot-tying, respectively. Without the need of engineered features or carefully tuned gesture segmentation, our model can successfully decode skill information from raw motion profiles via end-to-end learning. Meanwhile, the proposed model is able to reliably interpret skills within a 1–3 second window, without needing an observation of entire training trial. This study highlights the potential of deep architectures for efficient online skill assessment in modern surgical training.",
"title": ""
},
{
"docid": "429c900f6ac66bcea5aa068d27f5b99f",
"text": "Recent researches shows that Brain Computer Interface (BCI) technology provides effective way of communication between human and physical device. In this work, an EEG based wireless mobile robot is implemented for people suffer from motor disabilities can interact with physical devices based on Brain Computer Interface (BCI). An experimental model of mobile robot is explored and it can be controlled by human eye blink strength. EEG signals are acquired from NeuroSky Mind wave Sensor (single channel prototype) in non-invasive manner and Signal features are extracted by adopting Discrete Wavelet Transform (DWT) to amend the signal resolution. We analyze and compare the db4 and db7 wavelets for accurate classification of blink signals. Different classes of movements are achieved based on different blink strength of user. The experimental setup of adaptive human machine interface system provides better accuracy and navigates the mobile robot based on user command, so it can be adaptable for disabled people.",
"title": ""
},
{
"docid": "a2b858e253a2f5075ae294e52c0b3bb7",
"text": "Learning and evolution are two fundamental forms of adaptation. There has been a great interest in combining learning and evolution with artificial neural networks (ANN’s) in recent years. This paper: 1) reviews different combinations between ANN’s and evolutionary algorithms (EA’s), including using EA’s to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA’s; and 3) points out possible future research directions. It is shown, through a considerably large literature review, that combinations between ANN’s and EA’s can lead to significantly better intelligent systems than relying on ANN’s or EA’s alone.",
"title": ""
},
{
"docid": "6b70a42b41de6831604e14904f682b69",
"text": "A large proportion of the Indian population is excluded from basic banking services. Just one in two Indians has access to a savings bank account and just one in seven Indians has access to bank credit (Business Standard, June 28 2013). There are merely 684 million savings bank accounts in the country with a population of 1.2 billion. Branch per 100,000 adult ratio in India stands at 747 compared to 1,065 for Brazil and 2,063 for Malaysia (World Bank Financial Access Report 2010). As more people, especially the poor, gain access to financial services, they will be able to save better and get access to funding in a more structured manner. This will reduce income inequality, help the poor up the ladder, and contribute to economic development. There is a need for transactions and savings accounts for the under-served in the population. Mobile banking has been evolved in last couple of years with the help of Mobile penetration, which has shown phenomenal growth in rural areas of India. The rural subscription increased from 398.68 million at the end of December 2014 to 404.16 million at the end of January 2015, said in a statement by the Telecom Regulatory Authority of India. Banks in India are already investing in mobile technology and security from last couple of years. They are adding value in services such as developing smartphone apps, mobile wallets and educating consumers about the benefits of using the mobile banking resulting in adoption of mobile banking faster among consumers as compared to internet banking.\n The objective of this study is:\n 1. To understand the scope of mobile banking to reach unbanked population in India.\n 2. To analyze the learnings of M-PESA and Payments Bank Opportunity.\n 3. To evaluate the upcoming challenges for the payments bank success in India.",
"title": ""
},
{
"docid": "d4ab2085eec138f99d4d490b0cbf9e3a",
"text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.",
"title": ""
},
{
"docid": "eae6688a21cdfc2a39d14486b2c9e8eb",
"text": "Chronic kidney disease (CKD) is a major public health concern with rising prevalence. In this study we consider 24 predictive parameters and create a machine learning classifier to detect CKD. We evaluate our approach on a dataset of 400 individuals, where 250 of them have CKD. Using our approach we achieve a detection accuracy of 0.993 according to the F1-measure with 0.1084 root mean square error. This is a 56% reduction of mean square error compared to the state of the art (i.e., the CKD-EPI equation: a glomerular filtration rate estimator). We also perform feature selection to determine the most relevant attributes for detecting CKD and rank them according to their predictability. We identify new predictive attributes which have not been used by any previous GFR estimator equations. Finally, we perform a cost-accuracy tradeoff analysis to identify a new CKD detection approach with high accuracy and low cost.",
"title": ""
},
{
"docid": "6465daca71e18cb76ec5442fb94f625a",
"text": "In this paper, we show how an open-source, language-independent proofreading tool has been built. Many languages lack contextual proofreading tools; for many others, only partial solutions are available. Using existing, largely language-independent tools and collaborative processes it is possible to develop a practical style and grammar checker and to fight the digital divide in countries where commercial linguistic application software is unavailable or too expensive for average users. The described solution depends on relatively easily available language resources and does not require a fully formalized grammar nor a deep parser, yet it can detect many frequent context-dependent spelling mistakes, as well as grammatical, punctuation, usage, and stylistic errors. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "46db4cfa5ccb08da3ca884ad794dc419",
"text": "Mutation testing of Python programs raises a problem of incompetent mutants. Incompetent mutants cause execution errors due to inconsistency of types that cannot be resolved before run-time. We present a practical approach in which incompetent mutants can be generated, but the solution is transparent for a user and incompetent mutants are detected by a mutation system during test execution. Experiments with 20 traditional and object-oriented operators confirmed that the overhead can be accepted. The paper presents an experimental evaluation of the first- and higher-order mutation. Four algorithms to the 2nd and 3rd order mutant generation were applied. The impact of code coverage consideration on the process efficiency is discussed. The experiments were supported by the MutPy system for mutation testing of Python programs.",
"title": ""
},
{
"docid": "1c16fa259b56e3d64f2468fdf758693a",
"text": "Dysregulated expression of microRNAs (miRNAs) in various tissues has been associated with a variety of diseases, including cancers. Here we demonstrate that miRNAs are present in the serum and plasma of humans and other animals such as mice, rats, bovine fetuses, calves, and horses. The levels of miRNAs in serum are stable, reproducible, and consistent among individuals of the same species. Employing Solexa, we sequenced all serum miRNAs of healthy Chinese subjects and found over 100 and 91 serum miRNAs in male and female subjects, respectively. We also identified specific expression patterns of serum miRNAs for lung cancer, colorectal cancer, and diabetes, providing evidence that serum miRNAs contain fingerprints for various diseases. Two non-small cell lung cancer-specific serum miRNAs obtained by Solexa were further validated in an independent trial of 75 healthy donors and 152 cancer patients, using quantitative reverse transcription polymerase chain reaction assays. Through these analyses, we conclude that serum miRNAs can serve as potential biomarkers for the detection of various cancers and other diseases.",
"title": ""
},
{
"docid": "eb0da55555e816d706908e0695075dc5",
"text": "With the fast progression of digital data exchange information security has become an important issue in data communication. Encryption algorithms play an important role in information security system. These algorithms use techniques to enhance the data confidentiality and privacy by making the information indecipherable which can be only be decoded or decrypted by party those possesses the associated key. But at the same time, these algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. So we need to evaluate the performance of different cryptographic algorithms to find out best algorithm to use in future. This paper provides evaluation of both symmetric (AES, DES, Blowfish) as well as asymmetric (RSA) cryptographic algorithms by taking different types of files like Binary, text and image files. A comparison has been conducted for these encryption algorithms using evaluation parameters such as encryption time, decryption time and throughput. Simulation results are given to demonstrate the effectiveness of each.",
"title": ""
},
{
"docid": "9e95ce11f502478c11df990d3465360f",
"text": "This paper presents a ultra-wideband (UWB) micro-strip structure high-pass filter with multi-stubs. The proposed filter was designed using a combination of 4 short-circuited stubs and an open-circuited stub in the form of micro-strip lines. The short-circuited stubs are to realize a high-pass filter with a bad band rejection. In order to achieve a steep cutoff, a transmission zero can be added thus an open-circuited stub is used. The passband is 5-19 GHz. The insertion loss is greater than -2dB and the return loss is less than -10dB, while the suppression of the modified filter is better than 30 dB below 4.2GHz.",
"title": ""
},
{
"docid": "98978373c863f49ed7cccda9867b8a5e",
"text": "Increasing vulnerability of plants to a variety of stresses such as drought, salt and extreme temperatures poses a global threat to sustained growth and productivity of major crops. Of these stresses, drought represents a considerable threat to plant growth and development. In view of this, developing staple food cultivars with improved drought tolerance emerges as the most sustainable solution toward improving crop productivity in a scenario of climate change. In parallel, unraveling the genetic architecture and the targeted identification of molecular networks using modern \"OMICS\" analyses, that can underpin drought tolerance mechanisms, is urgently required. Importantly, integrated studies intending to elucidate complex mechanisms can bridge the gap existing in our current knowledge about drought stress tolerance in plants. It is now well established that drought tolerance is regulated by several genes, including transcription factors (TFs) that enable plants to withstand unfavorable conditions, and these remain potential genomic candidates for their wide application in crop breeding. These TFs represent the key molecular switches orchestrating the regulation of plant developmental processes in response to a variety of stresses. The current review aims to offer a deeper understanding of TFs engaged in regulating plant's response under drought stress and to devise potential strategies to improve plant tolerance against drought.",
"title": ""
},
{
"docid": "8c214f081f47e12d4dccd71b6038d3bf",
"text": "Switched reluctance machines (SRMs) are considered as serious candidates for starter/alternator (S/A) systems in more electric cars. Robust performance in the presence of high temperature, safe operation, offering high efficiency, and a very long constant power region, along with a rugged structure contribute to their suitability for this high impact application. To enhance these qualities, we have developed key technologies including sensorless operation over the entire speed range and closed-loop torque and speed regulation. The present paper offers an in-depth analysis of the drive dynamics during motoring and generating modes of operation. These findings will be used to explain our control strategies in the context of the S/A application. Experimental and simulation results are also demonstrated to validate the practicality of our claims.",
"title": ""
},
{
"docid": "b4ecb4c62562517b9b16088ad8ae8c22",
"text": "This articleii presents the results of video-based Human Robot Interaction (HRI) trials which investigated people’s perceptions of different robot appearances and associated attention-seeking features and behaviors displayed by robots with different appearance and behaviors. The HRI trials studied the participants’ preferences for various features of robot appearance and behavior, as well as their personality attributions towards the robots compared to their own personalities. Overall, participants tended to prefer robots with more human-like appearance and attributes. However, systematic individual differences in the dynamic appearance ratings are not consistent with a universal effect. Introverts and participants with lower emotional stability tended to prefer the mechanical looking appearance to a greater degree than other participants. It is also shown that it is possible to rate individual elements of a particular robot’s behavior and then assess the contribution, or otherwise, of that element to the overall perception of the robot by people. Relating participants’ dynamic appearance ratings of individual robots to independent static appearance ratings provided evidence that could be taken to support a portion of the left hand side of Mori’s theoretically proposed ‘uncanny valley’ diagram. Suggestions for future work are outlined. I.INTRODUCTION Robots that are currently commercially available for use in a domestic environment and which have human interaction features are often orientated towards toy or entertainment functions. In the future, a robot companion which is to find a more generally useful place within a human oriented domestic environment, and thus sharing a private home with a person or family, must satisfy two main criteria (Dautenhahn et al. (2005); Syrdal et al. (2006); Woods et al. (2007)): It must be able to perform a range of useful tasks or functions. It must carry out these tasks or functions in a manner that is socially acceptable and comfortable for people it shares the environment with and/or it interacts with. The technical challenges in getting a robot to perform useful tasks are extremely difficult, and many researchers are currently researching into the technical capabilities that will be required to perform useful functions in a human centered environment including navigation, manipulation, vision, speech, sensing, safety, system integration and planning. The second criteria is arguably equally important, because if the robot does not exhibit socially acceptable behavior, then people may reject the robot if it is annoying, irritating, unsettling or frightening to human users. Therefore: How can a robot behave in a socially acceptable manner? Research into social robots is generally contained within the rapidly developing field of Human-Robot Interaction (HRI). For an overview of socially interactive robots (robots designed to interact with humans in a social way) see Fong et al. (2003). Relevant examples of studies and investigations into human reactions to robots include: Goetz et al. (2003) where issues of robot appearance, behavior and task domains were investigated, and Severinson-Eklundh et al. (2003) which documents a longitudinal HRI trial investigating the human perspective of using a robotic assistant over several weeks . Khan (1998), Scopelliti et al. (2004) and Dautenhahn et al. (2005) have surveyed peoples’ views of domestic robots in order to aid the development of an initial design specification for domestic or servant robots. 
Kanda et al. (2004) presents results from a longitudinal HRI trial with a robot as a social partner and peer tutor aiding children learning English.",
"title": ""
},
{
"docid": "2cc1383f98adb6f9e522fe2b933d35e5",
"text": "This paper presents the innovative design of an air cooled permanent magnet assisted synchronous reluctance machine (PMaSyRM) for automotive traction application. Key design features include low cost ferrite magnets in an optimized rotor geometry with high saliency ratio, low weight and sufficient mechanical strength as well as a tailored hairpin stator winding in order to meet the demands of an A-segment battery electric vehicle (BEV). Effective torque ripple reduction techniques are analyzed and a suitable combination is chosen to keep additional manufacturing measures as low as possible. Although the ferrite magnets exhibit low remanence, it is shown that their contribution to the electrical machine's performance is essential in the field weakening region. Efficiency optimized torque-speed-characteristics are identified, including additional losses of the inverter, showing an overall system efficiency of more than 94 %. Lastly, the results of no load measurements of a prototype are compared to the FEM simulation results, indicating the proposed design of a PMaSyRM as a cost-effective alternative to state-of-the-art permanent magnet synchronous machines (PMSM) for vehicle traction purposes.",
"title": ""
},
{
"docid": "333e2df79425177f0ce2686bd5edbfbe",
"text": "The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuated temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect development of different types of information processing while learning fluctuated temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos for imitating the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action.",
"title": ""
},
{
"docid": "1e80f38e3ccc1047f7ee7c2b458c0beb",
"text": "This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which makes it robust to collisions and gives it a smooth compliant, motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. Thesis Supervisor: Rodney A. Brooks Title: Professor of Electrical Engineering and Computer Science, MIT 5.4. CASE (C): MODIFYING THE NATURAL DYNAMICS 95",
"title": ""
}
] |
scidocsrr
|
fa91f3c9e6426790765f00d995b88134
|
Image Popularity Prediction in Social Media Using Sentiment and Context Features
|
[
{
"docid": "d5647902c65b76a86ea800f1ae60c37d",
"text": "Understanding the factors that impact the popularity dynamics of social media can drive the design of effective information services, besides providing valuable insights to content generators and online advertisers. Taking YouTube as case study, we analyze how video popularity evolves since upload, extracting popularity trends that characterize groups of videos. We also analyze the referrers that lead users to videos, correlating them, features of the video and early popularity measures with the popularity trend and total observed popularity the video will experience. Our findings provide fundamental knowledge about popularity dynamics and its implications for services such as advertising and search.",
"title": ""
}
] |
[
{
"docid": "b7e28e79f938b617ba2e2ed7ef1bade3",
"text": "Computing in schools has gained momentum in the last two years resulting in GCSEs in Computing and teachers looking to up skill from Digital Literacy (ICT). For many students the subject of computer science concerns software code but writing code can be challenging, due to specific requirements on syntax and spelling with new ways of thinking required. Not only do many undergraduate students lack these ways of thinking, but there is a general misrepresentation of computing in education. Were computing taught as a more serious subject like science and mathematics, public understanding of the complexities of computer systems would increase, enabling those not directly involved with IT make better informed decisions and avoid incidents such as over budget and underperforming systems. We present our exploration into teaching a variety of computing skills, most significantly \"computational thinking\", to secondary-school age children through three very different engagements. First, we discuss Print craft, in which participants learn about computer-aided design and additive manufacturing by designing and building a miniature world from scratch using the popular open-world game Mine craft and 3D printers. Second, we look at how students can get a new perspective on familiar technology with a workshop using App Inventor, a graphical Android programming environment. Finally, we look at an ongoing after school robotics club where participants face a number of challenges of their own making as they design and create a variety of robots using a number of common tools such as Scratch and Arduino.",
"title": ""
},
{
"docid": "b200836d9046e79b61627122419d93c4",
"text": "Digital evidence plays a vital role in determining legal case admissibility in electronic- and cyber-oriented crimes. Considering the complicated level of the Internet of Things (IoT) technology, performing the needed forensic investigation will be definitely faced by a number of challenges and obstacles, especially in digital evidence acquisition and analysis phases. Based on the currently available network forensic methods and tools, the performance of IoT forensic will be producing a deteriorated digital evidence trail due to the sophisticated nature of IoT connectivity and data exchangeability via the “things”. In this paper, a revision of IoT digital evidence acquisition procedure is provided. In addition, an improved theoretical framework for IoT forensic model that copes with evidence acquisition issues is proposed and discussed.",
"title": ""
},
{
"docid": "9de0e4e9667745bddc2b1f5683b4a6cb",
"text": "Electronic textile (e-textile) toolkits have been successful in broadening participation in STEAM-related activities, in expanding perceptions of computing, and in engaging users in creative, expressive, and meaningful digital-physical design. While a range of well-designed e-textile toolkits exist (e.g., LilyPad), they cater primarily to adults and older children and have a high barrier of entry for some users. We are investigating new approaches to support younger children (K-4) in the creative design, play, and customization of e-textiles and wearables without requiring the creation of code. This demo paper presents one such example of ongoing work: MakerShoe, an e-textile platform for designing shoe-based interactive wearable experiences. We discuss our two participatory design sessions as well as our initial prototype, which uses single-function magnetically attachable electronic modules to support circuit creation and the design of responsive, interactive behaviors.",
"title": ""
},
{
"docid": "83413682f018ae5aec9ec415679de940",
"text": "An 18-year-old female patient arrived at the emergency department complaining of abdominal pain and fullness after a heavy meal. Physical examination revealed she was filthy and cover in feces, and she experienced severe abdominal distension. She died in ED and a diagnostic autopsy examination was requested. At external examination, the pathologist observed a significant dilation of the anal sphincter and suspected sexual assault, thus alerting the Judicial Authority who assigned the case to our department for a forensic autopsy. During the autopsy, we observed anal orifice expansion without signs of violence; food was found in the pleural cavity. The stomach was hyper-distended and perforated at three different points as well as the diaphragm. The patient was suffering from anorexia nervosa with episodes of overeating followed by manual voiding of her feces from the anal cavity (thus explaining the anal dilatation). The forensic pathologists closed the case as an accidental death.",
"title": ""
},
{
"docid": "0a4471110d3a5a0dc66a6acf95d7f306",
"text": "Acute epididymitis represents a common medical condition in the urological outpatient clinic. Mostly, epididymitis is caused by bacterial ascent through the urogenital tract, with pathogens originating either from sexually transmitted diseases or urinary tract infections. Although conservative antimicrobial therapy is possible in the majority of patients and is usually sufficient to eradicate the pathogen, studies have shown persistent oligozoospermia and azoospermia in up to 40% of these patients. Animal models of epididymitis are created to delineate the underlying reasons for this observation and the additional impairment of sperm function that is often associated with the disease. Accumulated data provide evidence of a differential expression of immune cells, immunoregulatory genes and pathogen-sensing molecules along the length of the epididymal duct. The evidence suggests that a tolerogenic environment exists in the caput epididymidis, but that inflammatory responses are most intense toward the cauda epididymidis. This is consistent with the need to provide protection for the neo-antigens of spermatozoa emerging from the testis, without compromising the ability to respond to ascending infections. However, severe inflammatory responses, particularly in the cauda, may lead to collateral damage to the structure and function of the epididymis. Convergence of the clinical observations with appropriate animal studies should lead to better understanding of the immunological environment throughout the epididymis, the parameters underlying susceptibility to epididymitis, and to therapeutic approaches that can mitigate epididymal damage and subsequent fertility problems.",
"title": ""
},
{
"docid": "c8a27aecd6f356bfdaeb7c33558843df",
"text": "Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough?human tactile to visual feedback control?will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our life. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies.",
"title": ""
},
{
"docid": "58917e3cbb1542185ac1af9edcf950eb",
"text": "The Energy Committee of the Royal Swedish Academy of Sciences has in a series of projects gathered information and knowledge on renewable energy from various sources, both within and outside the academic world. In this article, we synthesize and summarize some of the main points on renewable energy from the various Energy Committee projects and the Committee’s Energy 2050 symposium, regarding energy from water and wind, bioenergy, and solar energy. We further summarize the Energy Committee’s scenario estimates of future renewable energy contributions to the global energy system, and other presentations given at the Energy 2050 symposium. In general, international coordination and investment in energy research and development is crucial to enable future reliance on renewable energy sources with minimal fossil fuel use.",
"title": ""
},
{
"docid": "10b94bdea46ff663dd01291c5dac9e9f",
"text": "The notion of an instance is ubiquitous in knowledge representations for domain modeling. Most languages used for domain modeling offer syntactic or semantic restrictions on specific language constructs that distinguish individuals and classes in the application domain. The use, however, of instances and classes to represent domain entities has been driven by concerns that range from the strictly practical (e.g. the exploitation of inheritance) to the vaguely philosophical (e.g. intuitive notions of intension and extension). We demonstrate the importance of establishing a clear ontological distinction between instances and classes, and then show modeling scenarios where a single object may best be viewed as a class and an instance. To avoid ambiguous interpretations of such objects, it is necessary to introduce separate universes of discourse in which the same object exists in different forms. We show that a limited facility to support this notion exists in modeling languages like Smalltalk and CLOS, and argue that a more general facility should be made explicit in modeling languages.",
"title": ""
},
{
"docid": "208e606e98f4d2a59e1f5773adb7ca86",
"text": "Complex Event Processing (CEP) denotes algorithmic method s for making sense of events by deriving higher-level knowledge, or c omplex events, from lower-level events in a timely fashion and permanently. At t he core of CEP are queries continuously monitoring the incoming stream of “si mple” events and recognizing “complex” events from these simple events. Event q ueries monitoring incoming streams of simple events serve as specification of sit uations that manifest themselves as certain combinations of simple events occurr ing, or not occurring, over time and that cannot be detected solely from one or parts of the single events involved. Special purpose Event Query Languages (EQLs) have been deve lop d for the expression of the complex events in a convenient, concise, eff ctive and maintainable manner. This chapter identifies five language styles for CEP, namelycomposition operators, data stream query languages, production rules, timed state machines, and logic languages,describes their main traits, illustrates them on a sensor ne twork use case and discusses suitable application areas of ea ch language style.",
"title": ""
},
{
"docid": "662139f7eea01b66ba84aed668e9f76d",
"text": "Imbalance data are defined as a dataset whose proportion of classes is severely skewed. Classification performance of existing models tends to deteriorate due to class distribution imbalance. In addition, over-representation by majority classes prevents a classifier from paying attention tominority classes, which are generally more interesting. An effective ensemble classificationmethod called RHSBoost has been proposed to address the imbalance classification problem. This classification rule uses random undersampling and ROSE sampling under a boosting scheme. According to the experimental results, RHSBoost appears to be an attractive classification model for imbalance data. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "73e0ef5aa2eed22eb03d93d0ccfe5aed",
"text": "This article offers a formal account of curiosity and insight in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how people attain insight and understanding using just a handful of observations, which are solicited through curious behavior. We use simulations of abstract rule learning and approximate Bayesian inference to show that minimizing (expected) variational free energy leads to active sampling of novel contingencies. This epistemic behavior closes explanatory gaps in generative models of the world, thereby reducing uncertainty and satisfying curiosity. We then move from epistemic learning to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries (i.e., invariances or rules) in their generative models. The ensuing Bayesian model reduction evinces mechanisms associated with sleep and has all the hallmarks of “aha” moments. This formulation moves toward a computational account of consciousness in the pre-Cartesian sense of sharable knowledge (i.e., con: “together”; scire: “to know”).",
"title": ""
},
{
"docid": "0860b29f52d403a0ff728a3e356ec071",
"text": "Neuroanatomy has entered a new era, culminating in the search for the connectome, otherwise known as the brain's wiring diagram. While this approach has led to landmark discoveries in neuroscience, potential neurosurgical applications and collaborations have been lagging. In this article, the authors describe the ideas and concepts behind the connectome and its analysis with graph theory. Following this they then describe how to form a connectome using resting state functional MRI data as an example. Next they highlight selected insights into healthy brain function that have been derived from connectome analysis and illustrate how studies into normal development, cognitive function, and the effects of synthetic lesioning can be relevant to neurosurgery. Finally, they provide a précis of early applications of the connectome and related techniques to traumatic brain injury, functional neurosurgery, and neurooncology.",
"title": ""
},
{
"docid": "6707eb036c97e7bc9ea4416462a9ceaf",
"text": "Large networks are becoming a widely used abstraction for studying complex systems in a broad set of disciplines, ranging from social-network analysis to molecular biology and neuroscience. Despite an increasing need to analyze and manipulate large networks, only a limited number of tools are available for this task.\n Here, we describe the Stanford Network Analysis Platform (SNAP), a general-purpose, high-performance system that provides easy-to-use, high-level operations for analysis and manipulation of large networks. We present SNAP functionality, describe its implementational details, and give performance benchmarks. SNAP has been developed for single big-memory machines, and it balances the trade-off between maximum performance, compact in-memory graph representation, and the ability to handle dynamic graphs in which nodes and edges are being added or removed over time. SNAP can process massive networks with hundreds of millions of nodes and billions of edges. SNAP offers over 140 different graph algorithms that can efficiently manipulate large graphs, calculate structural properties, generate regular and random graphs, and handle attributes and metadata on nodes and edges. Besides being able to handle large graphs, an additional strength of SNAP is that networks and their attributes are fully dynamic; they can be modified during the computation at low cost. SNAP is provided as an open-source library in C++ as well as a module in Python.\n We also describe the Stanford Large Network Dataset, a set of social and information real-world networks and datasets, which we make publicly available. The collection is a complementary resource to our SNAP software and is widely used for development and benchmarking of graph analytics algorithms.",
"title": ""
},
{
"docid": "46410be2730753051c4cb919032fad6f",
"text": "categories. That is, since cue validity is the probability of being in some category given some property, this probability will increase (or at worst not decrease) as the size of the category increases (e.g. the probability of being an animal given the property of flying is greater than the probability of bird given flying, since there must be more animals that fly than birds that fly).6 The idea that cohesive categories maximize the probability of particular properties given the category fares no better. In this case, the most specific categories will always be picked out. Medin (1982) has analyzed a variety of formal measures of category cohe siveness and pointed out problems with all of them. For example, one possible principle is to have concepts such that they minimize the similarity between contrasting categories; but minimizing between-category similarity will always lead one to sort a set of n objects into exactly two categories. Similarly, functions based on maximizing within-category similarity while minimizing between-category similarity lead to a variety of problems and counterintuitive expectations about when to accept new members into existent categories versus when to set up new categories. At a less formal but still abstract level, Sternberg (1982) has tried to translate some of Goodman's (e.g. 1983) ideas about induction into possible constraints on natural concepts. Sternberg suggests that the apparent naturalness of a concept increases with the familiarity of the concept (where familiarity is related to Goodman's notion of entrenchment), and decreases with the number of transformations specified in the concept (e.g. aging specifies certain trans",
"title": ""
},
{
"docid": "dda2fdd40378ba3340354f836e6cd131",
"text": "Successful face analysis requires robust methods. It has been hard to compare the methods due to different experimental setups. We carried out a comparison study for the state-of-the-art gender classification methods to find out their actual reliability. The main contributions are comprehensive and comparable classification results for the gender classification methods combined with automatic real-time face detection and, in addition, with manual face normalization. We also experimented by combining gender classifier outputs arithmetically. This lead to increased classification accuracies. Furthermore, we contribute guidelines to carry out classification experiments, knowledge on the strengths and weaknesses of the gender classification methods, and two new variants of the known methods. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2432c853e62db6ffa27414a38218ffc0",
"text": "Imagine trying to stuff about 10,000 miles of spaghetti inside a basketball. Then, if that was not difficult enough, attempt to find a unique one inch segment of pasta from the middle of this mess, or try to duplicate, untangle and separate individual strings to opposite ends. This simple analogy illustrates some of the daunting tasks associated with the transcription, repair and replication of the nearly 2 meters of DNA that is packaged into the confines of a tiny eukaryotic nucleus. The solution to each of these problems lies in the assembly of the eukaryotic genome into chromatin, a structural polymer that not only solves the basic packaging problem, but also provides a dynamic platform that controls all DNA-mediated processes within the nucleus. The basic unit of chromatin is the nucleosome core particle, which contains 147 bp of DNA wrapped nearly twice around an octamer of the core histones. The histone octamer is composed of a central heterotetramer of histones H3 and H4, flanked by two heterodimers of histones H2A and H2B. Each nucleosome is separated by 10–60 bp of ‘linker’ DNA, and the resulting nucleosomal array constitutes a chromatin fiber of ~10 nm in diameter. This simple ‘beads-ona-string’ arrangement is folded into more condensed, ~30 nm thick fibers that are stabilized by binding of a linker histone to each nucleosome core (note that linker histones are not related in sequence to the core histones). Such 30 nm fibers are then further condensed in vivo to form 100–400 nm thick interphase fibers or the more highly compacted metaphase chromosome structures. This organization of DNA into chromatin fibers hinders its accessibility to proteins that must ‘read’ and/or copy the nucleotide base sequence, and consequently such structures must be dynamic and capable of regulated unfolding–folding transitions. Each of the core histones has a related globular domain that mediates histone–histone interactions within the octamer, and that organizes the two wraps of nucleosomal DNA. Each histone also harbors an aminoterminal 20–35 residue segment that is rich in basic amino acids and extends from the surface of the nucleosome; histone H2A is unique in having an additional ~37 amino acid carboxy-terminal domain that protrudes from the nucleosome. These histone ‘tails’ do not contribute significantly to the structure of individual nucleosomes nor to their stability, but they do play an essential role in controlling the folding of nucleosomal arrays into higherorder structures. Indeed, in vitro removal of the histone tails results in nucleosomal arrays that cannot condense past the beads-on-astring 10 nm fiber. Although the highly basic histone tails are generally viewed as DNA-binding modules, their essential roles in tail-mediated chromatin folding also involve inter-nucleosomal histone–histone interactions.",
"title": ""
},
{
"docid": "76e374d5a1e71822e1d72632136ad9f2",
"text": "This paper proposes two novel broadband microstrip antennas using coplanar feed-line. By feeding the patch with a suitable shape of the coplanar line in the slot of the patch, the broadband character is achieved. Compared with the antenna fed by a U-shaped feed-line, the antenna with L-shaped feed-line not only has wider bandwidth but also achieves the circular polarization character. The measured bandwidths of 25% and 34% are achieved, and both of the antennas have good radiation characteristics in the work band.",
"title": ""
},
{
"docid": "35260e253551bcfd21ce6d08c707f092",
"text": "Current debugging and optimization methods scale poorly to deal with the complexity of modern Internet services, in which a single request triggers parallel execution of numerous heterogeneous software components over a distributed set of computers. The Achilles’ heel of current methods is the need for a complete and accurate model of the system under observation: producing such a model is challenging because it requires either assimilating the collective knowledge of hundreds of programmers responsible for the individual components or restricting the ways in which components interact. Fortunately, the scale of modern Internet services offers a compensating benefit: the sheer volume of requests serviced means that, even at low sampling rates, one can gather a tremendous amount of empirical performance observations and apply “big data” techniques to analyze those observations. In this paper, we show how one can automatically construct a model of request execution from pre-existing component logs by generating a large number of potential hypotheses about program behavior and rejecting hypotheses contradicted by the empirical observations. We also show how one can validate potential performance improvements without costly implementation effort by leveraging the variation in component behavior that arises naturally over large numbers of requests to measure the impact of optimizing individual components or changing scheduling behavior. We validate our methodology by analyzing performance traces of over 1.3 million requests to Facebook servers. We present a detailed study of the factors that affect the end-to-end latency of such requests. We also use our methodology to suggest and validate a scheduling optimization for improving Facebook request latency.",
"title": ""
},
{
"docid": "feb2106f727966ab310211774c307fe5",
"text": "Bilious vomiting in newborns is an urgent condition that requires the immediate involvement of a team of pediatric surgeons and neonatologists for perioperative management. However, initial detection, evaluation and treatment are often performed by nurses, family physicians and general pediatricians. Bilious vomiting, with or without abdominal distention, is an initial sign of intestinal obstruction in newborns. A naso- or orogastric tube should be placed immediately to decompress the stomach. Physical examination should be followed by plain abdominal films. Dilated bowel loops and air-fluid levels suggest surgical obstruction. Contrast radiography may be required. Duodenal atresia, midgut malrotation and volvulus, jejunoileal atresia, meconium ileus and necrotizing enterocolitis are the most common causes of neonatal intestinal obstruction.",
"title": ""
},
{
"docid": "0fa35886300345106390cc55c6025257",
"text": "Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking — a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.",
"title": ""
}
] |
scidocsrr
|
2f19ec67587bece054ffe6404d76fba0
|
A visualization tool for evaluating access control policies in facebook-style social network systems
|
[
{
"docid": "93e2a4357573c446b2747f7b21d9d443",
"text": "Social Network Systems pioneer a paradigm of access control that is distinct from traditional approaches to access control. Gates coined the term Relationship-Based Access Control (ReBAC) to refer to this paradigm. ReBAC is characterized by the explicit tracking of interpersonal relationships between users, and the expression of access control policies in terms of these relationships. This work explores what it takes to widen the applicability of ReBAC to application domains other than social computing. To this end, we formulate an archetypical ReBAC model to capture the essence of the paradigm, that is, authorization decisions are based on the relationship between the resource owner and the resource accessor in a social network maintained by the protection system. A novelty of the model is that it captures the contextual nature of relationships. We devise a policy language, based on modal logic, for composing access control policies that support delegation of trust. We use a case study in the domain of Electronic Health Records to demonstrate the utility of our model and its policy language. This work provides initial evidence to the feasibility and utility of ReBAC as a general-purpose paradigm of access control.",
"title": ""
}
] |
[
{
"docid": "c87e46e7221fb9b8486317cd2c3d4774",
"text": "A microprocessor-controlled automatic cluttercancellation subsystem, consisting of a programmable microwave attenuator and a programmable microwave phase-shifter controlled by a microprocessor-based control unit, has been developed for a microwave life-detection system (L-band 2 GHz or X-band 10 GHz) which can remotely sense breathing and heartbeat movements of living subjects. This automatic cluttercancellation subsystem has drastically improved a very slow p~ocess .of manual clutter-cancellation adjustment in our preVIOU.S mlcro~av.e sys~em. ~his is very important for some potential applications mcludmg location of earthquake or avalanche-trapped victims through rubble. A series of experiments have been conducted to demonstrate the applicability of this microwave life-detection system for rescue purposes. The automatic clutter-canceler may also have a potential application in some CW radar systems.",
"title": ""
},
{
"docid": "903d00a02846450ebd18a8ce865889b5",
"text": "The ability to solve probability word problems such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step endto-end fully automated approach for solving such questions that is able to automatically provide answers to exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a highlevel model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. On a dataset of 2160 probability problems, our solver is able to correctly answer 97.5% of the questions given a correct model. On the end-toend evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).",
"title": ""
},
{
"docid": "33e03ac5663f72166e17d76861fb69c7",
"text": "The critical-period hypothesis for second-language acquisition was tested on data from the 1990 U.S. Census using responses from 2.3 million immigrants with Spanish or Chinese language backgrounds. The analyses tested a key prediction of the hypothesis, namely, that the line regressing second-language attainment on age of immigration would be markedly different on either side of the critical-age point. Predictions tested were that there would be a difference in slope, a difference in the mean while controlling for slope, or both. The results showed large linear effects for level of education and for age of immigration, but a negligible amount of additional variance was accounted for when the parameters for difference in slope and difference in means were estimated. Thus, the pattern of decline in second-language acquisition failed to produce the discontinuity that is an essential hallmark of a critical period.",
"title": ""
},
{
"docid": "a9d94467bbcb01a84c84fa5c8981076f",
"text": "Gavilea australis is a terrestrial orchid endemic from insular south Argentina and Chile. Meeting aspects of mycorrhizal fungi identity and compatibility in this orchid species is essential for propagation and conservation purposes. These knowledge represent also a first approach to elucidate the mycorrhizal specificity of this species. In order to evaluate both the mycorrhizal compatibility and the symbiotic seed germination of G. australis, we isolated and identified its root endophytic fungal strains as well as those from two sympatric species: Gavilea lutea and Codonorchis lessonii. In addition, we tested two other strains isolated from allopatric terrestrial orchid species from central Argentina. All fungal strains formed coilings and pelotons inside protocorms and promoted, at varying degrees, seed germination, and protocorm development until seedlings had two to three leaves. These results suggest a low mycorrhizal specificity of G. australis and contribute to a better knowledge of the biology of this orchid as well as of other sympatric Patagonian orchid species, all of them currently under serious risk of extinction.",
"title": ""
},
{
"docid": "f4c2a00b8a602203c86eaebc6f111f46",
"text": "Tamara Kulesa: Hello. This is Tamara Kulesa, Worldwide Marketing Manager for IBM Global Business Services for the Global Government Industry. I am here today with Susanne Dirks, Manager of the IBM Institute for Business Values Global Center for Economic Development in Ireland. Susanne is responsible for the research and writing of the newly published report, \"A Vision of Smarter Cities: How Cities Can Lead the Way into a Prosperous and Sustainable Future.\" Susanne, thank you for joining me today.",
"title": ""
},
{
"docid": "308933cb94f37ec511bf7e0838ad0996",
"text": "The original chi-square test, often known as Pearson’s chi-square, dates from papers by Karl Pearson in the earlier 1900s. The test serves both as a ”goodnessof-fit” test, where the data are categorized along one dimension, and as a test for the more common ”contingency table”, in which categorization is across two or more dimensions. Voinov and Nikulin, this volume, discuss the controversy over the correct form for the goodness of fit test. This entry will focus on the lack of agreement about tests on contingency tables. In 2000 the Vermont State legislature approved a bill authorizing civil unions. The vote can be broken down by gender to produce the following table, with the expected frequencies given in parentheses. The expected frequencies are computed as Ri × Cj/N, where Ri and Cj represent row and column marginal totals and N is the grand total.",
"title": ""
},
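A minimal sketch of the expected-frequency rule quoted in the abstract above (expected count = Ri × Cj / N, summed into the Pearson statistic); the 2×2 observed counts are invented for illustration and are not the Vermont vote data.

```python
# Illustrative sketch: Pearson chi-square for a 2x2 contingency table,
# with expected counts E[i][j] = (row total * column total) / grand total.
# The observed counts below are hypothetical, not the Vermont vote data.

observed = [
    [35, 14],   # group 1: yes / no (made-up)
    [60, 41],   # group 2: yes / no (made-up)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(f"chi-square statistic: {chi_square:.3f}")  # compare to chi2 with 1 df
```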
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
},
{
"docid": "91d0f12e9303b93521146d4d650a63df",
"text": "We utilize the state-of-the-art in deep learning to show that we can learn by example what constitutes humor in the context of a Yelp review. To the best of the authors knowledge, no systematic study of deep learning for humor exists – thus, we construct a scaffolded study. First, we use “shallow” methods such as Random Forests and Linear Discriminants built on top of bag-of-words and word vector features. Then, we build deep feedforward networks on top of these features – in some sense, measuring how much of an effect basic feedforward nets help. Then, we use recurrent neural networks and convolutional neural networks to more accurately model the sequential nature of a review.",
"title": ""
},
{
"docid": "a41444799f295e5fc325626fd663d77d",
"text": "Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neural, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle; a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when comparing to state-of-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.",
"title": ""
},
{
"docid": "59608978a30fcf6fc8bc0b92982abe69",
"text": "The self-advocacy movement (Dybwad & Bersani, 1996) grew out of resistance to oppressive practices of institutionalization (and worse) for people with cognitive disabilities. Moving beyond the worst abuses, people with cognitive disabilities seek as full participation in society as possible.",
"title": ""
},
{
"docid": "f1fe8a9d2e4886f040b494d76bc4bb78",
"text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.",
"title": ""
},
{
"docid": "340dd41b4236285433403da3eb99ee08",
"text": "Gut microbiota is an assortment of microorganisms inhabiting the length and width of the mammalian gastrointestinal tract. The composition of this microbial community is host specific, evolving throughout an individual's lifetime and susceptible to both exogenous and endogenous modifications. Recent renewed interest in the structure and function of this \"organ\" has illuminated its central position in health and disease. The microbiota is intimately involved in numerous aspects of normal host physiology, from nutritional status to behavior and stress response. Additionally, they can be a central or a contributing cause of many diseases, affecting both near and far organ systems. The overall balance in the composition of the gut microbial community, as well as the presence or absence of key species capable of effecting specific responses, is important in ensuring homeostasis or lack thereof at the intestinal mucosa and beyond. The mechanisms through which microbiota exerts its beneficial or detrimental influences remain largely undefined, but include elaboration of signaling molecules and recognition of bacterial epitopes by both intestinal epithelial and mucosal immune cells. The advances in modeling and analysis of gut microbiota will further our knowledge of their role in health and disease, allowing customization of existing and future therapeutic and prophylactic modalities.",
"title": ""
},
{
"docid": "a5cd7d46dc74d15344e2f3e9b79388a3",
"text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"title": ""
},
{
"docid": "252f4bcaeb5612a3018578ec2008dd71",
"text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .",
"title": ""
},
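A toy illustration of the exact k-mer matching idea described in the Kraken abstract above. The reference sequences, taxon labels, and simple majority-vote assignment are invented simplifications for illustration; they are not Kraken's actual index structures or its LCA-based classification algorithm.

```python
# Toy sketch of read classification by exact k-mer lookup against a reference
# index. Reference sequences and taxon names below are hypothetical.
from collections import Counter

K = 5

def kmers(seq, k=K):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Build a k-mer -> taxon map from (hypothetical) reference genomes.
references = {
    "taxonA": "ACGTACGTGGCA",
    "taxonB": "TTGACCGTAACG",
}
index = {}
for taxon, seq in references.items():
    for km in kmers(seq):
        index.setdefault(km, taxon)

def classify(read):
    """Assign the read to the taxon whose k-mers it matches most often."""
    hits = Counter(index[km] for km in kmers(read) if km in index)
    return hits.most_common(1)[0][0] if hits else "unclassified"

print(classify("ACGTACGTGG"))  # -> taxonA
```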
{
"docid": "6fe0c00d138165bbd3153c0cc4539c55",
"text": "A key skill for mobile robots is the ability to navigate e ciently through their environment. In the case of social or assistive robots, this involves navigating through human crowds. Typical performance criteria, such as reaching the goal using the shortest path, are not appropriate in such environments, where it is more important for the robot to move in a socially adaptive manner such as respecting comfort zones of the pedestrians. We propose a framework for socially adaptive path planning in dynamic environments, by generating human-like path trajectory. Our framework consists of three modules: a feature extraction module, Inverse Reinforcement Learning module, and a path planning module. The feature extraction module extracts features necessary to characterize the state information, such as density and velocity of surrounding obstacles, from a RGB-Depth sensor. The Inverse Reinforcement Learning module uses a set of demonstration trajectories generated by an expert to learn the expert’s behaviour when faced with di↵erent state features, and represent it as a cost function that respects social variables. Finally, the planning module integrates a threelayer architecture, where a global path is optimized according to a classical shortest-path objective using a global map known a priori, a local path is planned over a shorter distance using the features extracted from a RGB-D sensor and the cost function inferred from Inverse Reinforcement Learning module, and a low-level Beomjoon Kim E-mail: beomjoon.kim@mail.mcgill.ca Joelle Pineau School of Computer Science, McGill University, 3480 University, Canada Tel.: 514-398-5432 Fax: 514-398-3883 E-mail: jpineau@cs.mcgill.ca system handles avoidance of immediate obstacles. We evaluate our approach by deploying it on a real robotic wheelchair platform in various scenarios, and comparing the robot trajectories to human trajectories.",
"title": ""
},
{
"docid": "5d77b360ec198b4ab60923236d348f53",
"text": "'Transfers of meaning' are linguistic mechanisms that make it possible to use the same expression to refer to disjoint sorts of things. Here 1 discuss predicate transfer, an operation that takes names of properties into new names that denote properties to which they functionally correspond. It is this operation that is responsible for the new meaning of the predicate parked out back in the utterance 'I am parked out back', as well as for the lexical alternations that figure in systematic polysemy. Predicate transfer is subject to two general conditions, which require that basic and derived property stand in a functional correspondence and that the derived property should be a 'noteworthy' feature of its bearer. I argue that by appealing to predicate transfer we can maintain a very strict definition of syntactic identity, which rules out all cases of'sortal crossing', in which a term appears to refer to things of two sorts at the same time, as in examples like Ringo squeezed himself into a tight space; in such a case, the reflexive is strictly preferential with its antecedent. 1 I N T R O D U C T I O N By 'transfers of meaning' I mean the ensemble of productive linguistic processes that enable us to use the same expression to refer to what are intuitively distinct sorts of categories of things. Broadly speaking, transfers involve all the figures that traditional rhetoric describes as metaphors, synesthesias, metonymies, and synecdoches, in all their synchronic manifestations. The difference is that transfers are linguistic processes, whereas the rhetorical figures are defined and classified according to the independent conceptual relations that they exploit. The difference between metaphors and metonymies, for example, is that the first presupposes a resemblance and the second a contiguity. And we can go on to classify figures according to the particular conceptual schemas they rest on, either as general correspondences like 'abstract for concrete', 'part for whole', and 'animate for inanimate', or as more specific schemas like 'life is a journey' or 'polities are bodies'. But taken by themselves, these schemas and principles aren't sufficient to explain the linguistic phenomena of transfer. Granted that there is a salient correspondence between monarchs and crowns, for example, it still has to be explained why the word croum can be used to refer to monarchs—or for that matter why this fact n o Transfers of Meaning should have any linguistic consequences at all. For this we have to look to specifically linguistic mechanisms, which is what I will be talking about here. These mechanisms exist in the service of the expression of conceptual regularities, but they are in principle independent of them, and are constrained in ways that don't permit a purely pragmatic explanation. They are the linguistic handmaidens of figuration, but each is specialized in her offices. 2 MECHANISMS OF TRANSFER The easiest way to appreciate the difference between rhetorical figures and the linguistic mechanisms is to consider how we can exploit the same sorts of correspondences among things in the world to effect two different kinds of transfer. A customer hands his key to an attendant at a parking lot and says either (i) or (2): 1. This is parked out back. 2. I am parked out back. Both these utterances involve metonymies. In (1), for example, we would be inclined to say that the subject refers not to the key that the speaker is holding, but to the car that the key goes with. 
And in fact all the linguistic evidence supports this analysis. For example, the number of the demonstrative is determined by the intended referent, not the demonstratum. So even if the customer is holding up several keys that fit a single car, he would say 'This is parked out back', whereas if he's holding up a single key that fits several cars, he would say, 'These are parked out back'. We can make the same point looking at languages that mark demonstratives and adjectives for grammatical gender. In Italian, for example, the word for key is feminine, la chiave, and the word for truck is masculine, il camion. And if a customer gives the attendant the key to a truck it will be the referent, not the demonstratum, that determines the gender of the demonstrative and the adjective for 'parked', as in (3): 3. Holding up a key (la chiave, fern, sg.) to refer to a truck (il camion, masc.) Questo (masc. sg.) e parcheggiato (masc. sg.) in dietro. 'This (masc.) is parked (masc.) in back.' One final example to the same effect: we can conjoin another predicate that describes the car, but not a predicate that describes the key: 4. This is parked out back and may not start. 5. ??This fits only the left front door and is parked out back. So there's every reason for saying that the subject of sentences like these refers to the car. Geoffrey Nunberg 111 But what of an utterance like (2), 'I'm parked out back'? This too is plainly a metonymy of some sort, and there may be a temptation to analyze it as we would (1), saying that the subject of the sentence refers not to the speaker, but to the speaker's car. But the tests we have used to validate this analysis for the demonstrative in (1) give a different answer here. For example, if the speaker has two cars he wouldn't say: 6. We are parked out back. (though of course this would be an appropriate utterance if there were two people who were waiting for the car). By the same token, an Italian man who was waiting for his car would express this by using a masculine adjective parcheggiato for 'parked', even though the word for 'car' is feminine, la macchina. 7. Io sono parcheggiato (*parcheggiata, fern, sg.) dietro. And in this case, we can conjoin any other predicate that describes the speaker, but not always the one that describes the car: 8. I am parked out back and have been waiting for 15 minutes. 9. *I am parked out back and may not start. The conclusion is that the subject of (1) refers to the speaker, and the transfer involves the predicate. That is, the predicate parked out back contributes a property of persons, the property they possess in virtue of the locations of their cars. Now the difference between these examples clearly doesn't have anything to do with the kind of relations they exploit. In both cases we assume a correspondence between the things in one domain, the cars parked in various locations, and the things in another domain, keys or drivers as the case may be. But we can take semantic advantage of these correspondences in two different ways. Sentence (1) is a case of deferred ostension or deferred indexical reference, a process that allows a demonstrative or indexical to refer to an object that correspond in a certain way to the contextual element picked out by a demonstration or by the semantic character of the expression. In this connection, note that we can't get this kind of deferred reading when we use a description in place of a demonstrative, as in (10): 10. *The key I'm holding is parked out back. 
Whereas (2) exemplifies another kind of transfer process, which I'll call predicate transfer. The principle here is that the name of a property that applies to something in one domain can sometimes be used as the name of a property that applies to things in another domain, provided the two properties correspond in a certain way. And just to fill out the contrast here, note that unlike deferred ostension, predicate transfer is indifferent to how the bearer i i2 Transfers of Meaning of this new or derived property is referred to—by an indexical or description or whatever. For example, in this situation the parking lot manager could say to the attendant: 11. The man with the cigar (Mr. McDowell, etc.) is parked out back. What these examples show, then, is that unlike rhetorical classifications like metaphor and metonymy, the various mechanisms of transfer can't be distinguished simply by pointing at the types of correspondences they exploit. And for this reason the description of these mechanisms is fundamentally a linguistic problem, rather than a problem of conceptual analysis. That is, there is nothing we can learn about keys, drivers, or cars that will help us to explain the differences between examples like (i) and (2). In the rest of this paper I will be concentrating on just one of these mechanisms, predicate transfer. In this section I will schematize the conditions that license this operation. In the following sections I will spell out the role of predicate transfer in lexical polysemy, and then discuss its implication for some well-known syntactic puzzles. Finally, I will talk about some of the methodological difficulties that predicate transfer raises. 3 CONDITIONS ON PREDICATE TRANSFER Predicate transfer is subject to two conditions. The first of these I have already mentioned: the property denoted by the derived predicate has to correspond in a certain way to the property denoted by the original predicate. With an utterance like (2), for example, we begin with a functional correspondence between the locations of cars in a lot and the properties of the owners, or, more accurately, the 'havers', of these cars. When two property domains correspond in an interesting or useful way—of which more in a moment—we can schematize the operation of predicate transfer as follows: 12. Condition on predicate transfer Let & and &' be sets of properties that are related by a salient transfer function gt: & — &' Then if F is a predicate that denotes a property P e 0>, diere is also a predicate F', spelt like F, that denotes the property P', where P' gt(P). 6 In the cases we have been talking about, of course, these correspondences between properties are mediated by correspondences between their bearers— e.g. the functional relation from cars to their owners—and we might want to represent this directly. So let h be a salient function from a set of things A to another (disjoint) set of things B. Then for any predicate F t",
"title": ""
},
{
"docid": "e2c4c7e45080c9eb6f99be047ee65958",
"text": "This paper describes the current state of mu.semte.ch, a platform for building state-of-the-art web applications fuelled by Linked Data aware microservices. The platform assumes a mashup-like construction of single page web applications which consume various services. In order to reuse tooling built in the community, Linked Data is not pushed to the frontend.",
"title": ""
},
{
"docid": "edc578384d991eefa0929a1f41cfda4b",
"text": "This paper investigates the use of additive layer manufacturing (ALM) for waveguide components based on two Ku-band sidearm orthomode transducers (OMT). The advantages and disadvantages of the ALM manufacturing regarding RF waveguide components are discussed and measurement results are compared to those of an equal OMT manufactured by conventional techniques. The paper concludes with an outlook to the capability of advanced manufacturing techniques for RF space applications as well as ongoing development activities.",
"title": ""
},
{
"docid": "84647b51dbbe755534e1521d9d9cf843",
"text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>",
"title": ""
}
] |
scidocsrr
|
47b77866a34c0546dab1117659995ea4
|
Triply Supervised Decoder Networks for Joint Detection and Segmentation
|
[
{
"docid": "b4ed15850674851fb7e479b7181751d7",
"text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"title": ""
}
] |
[
{
"docid": "8b11c5c6b134576d8ce7ce3484e17822",
"text": "The popularity and complexity of online social networks (OSNs) continues to grow unabatedly with the most popular applications featuring hundreds of millions of active users. Ranging from social communities and discussion groups, to recommendation engines, tagging systems, mobile social networks, games, and virtual worlds, OSN applications have not only shifted the focus of application developers to the human factor, but have also transformed traditional application paradigms such as the way users communicate and navigate in the Internet. Indeed, understanding user behavior is now an integral part of online services and applications, with system and algorithm design becoming in effect user-centric. As expected, this paradigm shift has not left the research community unaffected, triggering intense research interest in the analysis of the structure and properties of online communities.",
"title": ""
},
{
"docid": "62155b5b07bae430364847098943b331",
"text": "A planar printed antenna comprising a driven strip monopole and a parasitic shorted strip, both of comparable length and closely coupled to each other, suitable for eight-band LTE/GSM/UMTS operation in the mobile phone is presented. The proposed antenna is mainly configured along the boundary of the no-ground portion on the system circuit board of the mobile phone to achieve a simple and compact structure. Also, the edge of the no-ground portion facing the system ground plane on the circuit board is not necessarily a straight line, leading to more degrees of freedom in allocating the required no-ground portion on the circuit board for printing the antenna. The driven strip monopole and the parasitic shorted strip both contribute their lowest and higher-order resonant modes to form two wide operating bands centered at about 830 and 2200 MHz to respectively cover the LTE700/GSM850/900 operation (698-960 MHz) and GSM1800/ 1900/UMTS/LTE2300/2500 operation (1710-2690 MHz).",
"title": ""
},
{
"docid": "955bd83f9135336d9c5d887065d31f04",
"text": "Current dialogue systems focus more on textual and speech context knowledge and are usually based on two speakers. Some recent work has investigated static image-based dialogue. However, several real-world human interactions also involve dynamic visual context (similar to videos) as well as dialogue exchanges among multiple speakers. To move closer towards such multimodal conversational skills and visually-situated applications, we introduce a new video-context, many-speaker dialogue dataset based on livebroadcast soccer game videos and chats from Twitch.tv. This challenging testbed allows us to develop visually-grounded dialogue models that should generate relevant temporal and spatial event language from the live video, while also being relevant to the chat history. For strong baselines, we also present several discriminative and generative models, e.g., based on tridirectional attention flow (TriDAF). We evaluate these models via retrieval ranking-recall, automatic phrasematching metrics, as well as human evaluation studies. We also present dataset analyses, model ablations, and visualizations to understand the contribution of different modalities and model components.",
"title": ""
},
{
"docid": "29f1144b4f3203bab29d7cb6b24fd065",
"text": "Virtual reality (VR)systems let users intuitively interact with 3D environments and have been used extensively for robotic teleoperation tasks. While more immersive than their 2D counterparts, early VR systems were expensive and required specialized hardware. Fortunately, there has been a recent proliferation of consumer-grade VR systems at affordable price points. These systems are inexpensive, relatively portable, and can be integrated into existing robotic frameworks. Our group has designed a VR teleoperation package for the Robot Operating System (ROS), ROS Reality, that can be easily integrated into such frameworks. ROS Reality is an open-source, over-the-Internet teleoperation interface between any ROS-enabled robot and any Unity-compatible VR headset. We completed a pilot study to test the efficacy of our system, with expert human users controlling a Baxter robot via ROS Reality to complete 24 dexterous manipulation tasks, compared to the same users controlling the robot via direct kinesthetic handling. This study provides insight into the feasibility of robotic teleoperation tasks in VR with current consumer-grade resources and exposes issues that need to be addressed in these VR systems. In addition, this paper presents a description of ROS Reality, its components, and architecture. We hope this system will be adopted by other research groups to allow for easy integration of VR teleoperated robots into future experiments.",
"title": ""
},
{
"docid": "c4fcf61b8f1313f81a738be3b631be34",
"text": "We study the behavior of the eigenvalues of a sublaplacian ∆b on a compact strictly pseudoconvex CR manifold M, as functions on the set P+ of positively oriented contact forms on M by endowing P+ with a natural metric topology.",
"title": ""
},
{
"docid": "357ae5590fb6f11fbd210baced2fc4ee",
"text": "To achieve the best results from an OCR system, the pre-processing steps must be performed with a high degree of accuracy and reliability. There are two critically important steps in the OCR pre-processing phase. First, blocks must be extracted from each page of the scanned document. Secondly, all blocks resulting from the first step must be arranged in the correct order. One of the most notable techniques for block ordering in the second step is the recursive x-y cut (RXYC) algorithm. This technique works accurately only when applied to documents with a simple page layout but it causes incorrect block ordering when applied to documents with complex page layouts. This paper proposes a modified recursive x-y cut algorithm for solving block ordering problems for documents with complex page layouts. This proposed algorithm can solve problems such as (1) the overlapping block problem; (2) the blocks overlay problem, and (3) the L-Shaped block problem.",
"title": ""
},
{
"docid": "150a6ff054746a7f42133527c13cb17c",
"text": "Motivated by the ongoing success of Linked Data and the growing amount of semantic data sources available on the Web, new challenges to query processing are emerging. Especially in distributed settings that require joining data provided by multiple sources, sophisticated optimization techniques are necessary for efficient query processing. We propose novel join processing and grouping techniques to minimize the number of remote requests, and develop an effective solution for source selection in the absence of preprocessed metadata. We present FedX, a practical framework that enables efficient SPARQL query processing on heterogeneous, virtually integrated Linked Data sources. In experiments, we demonstrate the practicability and efficiency of our framework on a set of real-world queries and data sources from the Linked Open Data cloud. With FedX we achieve a significant improvement in query performance over state-of-the-art federated query engines.",
"title": ""
},
{
"docid": "bd8ae67f959a7b840eff7e8c400a41e0",
"text": "Enabling a humanoid robot to drive a car, requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole-body, to ingress/egress the car. In this paper, we present a sensorbased reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal reference are sent to the robot control to achieve the driving task with the humanoid. We present results from a driving experience with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.",
"title": ""
},
{
"docid": "66fa9b79b1034e1fa3bf19857b5367c2",
"text": "We propose a boundedly-rational model of opinion formation in which individuals are subject to persuasion bias; that is, they fail to account for possible repetition in the information they receive. We show that persuasion bias implies the phenomenon of social influence, whereby one’s influence on group opinions depends not only on accuracy, but also on how well-connected one is in the social network that determines communication. Persuasion bias also implies the phenomenon of unidimensional opinions; that is, individuals’ opinions over a multidimensional set of issues converge to a single “left-right” spectrum. We explore the implications of our model in several natural settings, including political science and marketing, and we obtain a number of novel empirical implications. DeMarzo and Zwiebel: Graduate School of Business, Stanford University, Stanford CA 94305, Vayanos: MIT Sloan School of Management, 50 Memorial Drive E52-437, Cambridge MA 02142. This paper is an extensive revision of our paper, “A Model of Persuasion – With Implication for Financial Markets,” (first draft, May 1997). We are grateful to Nick Barberis, Gary Becker, Jonathan Bendor, Larry Blume, Simon Board, Eddie Dekel, Stefano DellaVigna, Darrell Duffie, David Easley, Glenn Ellison, Simon Gervais, Ed Glaeser, Ken Judd, David Kreps, Edward Lazear, George Loewenstein, Lee Nelson, Anthony Neuberger, Matthew Rabin, José Scheinkman, Antoinette Schoar, Peter Sorenson, Pietro Veronesi, Richard Zeckhauser, three anonymous referees, and seminar participants at the American Finance Association Annual Meetings, Boston University, Cornell, Carnegie-Mellon, ESSEC, the European Summer Symposium in Financial Markets at Gerzensee, HEC, the Hoover Institution, Insead, MIT, the NBER Asset Pricing Conference, the Northwestern Theory Summer Workshop, NYU, the Stanford Institute for Theoretical Economics, Stanford, Texas A&M, UCLA, U.C. Berkeley, Université Libre de Bruxelles, University of Michigan, University of Texas at Austin, University of Tilburg, and the Utah Winter Finance Conference for helpful comments and discussions. All errors are our own.",
"title": ""
},
{
"docid": "c9b3ae68f83ea8fc09d0d2171931330c",
"text": "Although previous studies have concluded that Internet use can help students in learning and research, a number of empirical investigations have confirmed that Internet addiction or excessive Internet use has negative effect on students. Thus, if the Internet does not always benefit students, under which conditions can Internet use have positive effects? Since students’ beliefs in their academic self-efficacy and their abilities to begin, continue, and complete their studies are as important as their academic successes and performances, this study hypothesizes that academic self-efficacy acts as a mediator for Internet use and academic performance. Based on Social cognitive theory, we argue that student academic performance will be mediated by academic self-efficacy with respect to Internet use. Two kinds of Internet use, general and professional, are considered to be antecedents of academic self-efficacy. Survey data from 212 twelfth-grade vocational high school students in Taiwan indicate that general Internet use has an indirect positive effect on student academic performance, which is also mediated through academic self-efficacy. In contrast, general Internet use has no significant direct impact on students learning performance. This study also shows that Internet anxiety moderates the relationship between academic self-efficacy and learning performance. In students with low Internet anxiety, the relationship is moderated, which results in enhanced learning performance. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7e03d09882c7c8fcab5df7a6bd12764f",
"text": "This paper describes a background digital calibration technique based on bitwise correlation (BWC) to correct the capacitive digital-to-analog converter (DAC) mismatch error in successive-approximation-register (SAR) analog-to-digital converters (ADC's). Aided by a single-bit pseudorandom noise (PN) injected to the ADC input, the calibration engine extracts all bit weights simultaneously to facilitate a digital-domain correction. The analog overhead associated with this technique is negligible and the conversion speed is fully retained (in contrast to [1] in which the ADC throughput is halved). A prototype 12bit 50-MS/s SAR ADC fabricated in 90-nm CMOS measured a 66.5-dB peak SNDR and an 86.0-dB peak SFDR with calibration, while occupying 0.046 mm2 and dissipating 3.3 mW from a 1.2-V supply. The calibration logic is estimated to occupy 0.072 mm2 with a power consumption of 1.4 mW in the same process.",
"title": ""
},
{
"docid": "152e8e88e8f560737ec0c20ae9aa0335",
"text": "UNLABELLED\nDysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired from the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance.\n\n\nKEY PRACTITIONER MESSAGE\nThe addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities) may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance.",
"title": ""
},
{
"docid": "60a0c63f6c1166970d440c1302ca0dbe",
"text": "In vehicle routing problems with time windows (VRPTW), a set of vehicles with limits on capacity and travel time are available to service a set of customers with demands and earliest and latest time for servicing. The objective is to minimize the cost of servicing the set of customers without being tardy or exceeding the capacity or travel time of the vehicles. As finding a feasible solution to the problem is NP-complete, search methods based upon heuristics are most promising for problems of practical size. In this paper we describe GIDEON, a genetic algorithm heuristic for solving the VRPTW. GIDEON consists of a global customer clustering method and a local post-optimization method. The global customer clustering method uses an adaptive search strategy based upon population genetics, to assign vehicles to customers. The best solution obtained from the clustering method is improved by a local post-optimization method. The synergy a between global adaptive clustering method and a local route optimization method produce better results than those obtained by competing heuristic search methods. On a standard set of 56 VRPTW problems obtained from the literature the GIDEON system obtained 41 new best known solutions.",
"title": ""
},
{
"docid": "9d9665a21e5126ba98add5a832521cd1",
"text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.",
"title": ""
},
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
},
{
"docid": "f6264315a5bbf32b9fa21488b4c80f03",
"text": "into empirical, corpus-based learning approaches to natural language processing (NLP). Most empirical NLP work to date has focused on relatively low-level language processing such as part-ofspeech tagging, text segmentation, and syntactic parsing. The success of these approaches has stimulated research in using empirical learning techniques in other facets of NLP, including semantic analysis—uncovering the meaning of an utterance. This article is an introduction to some of the emerging research in the application of corpusbased learning techniques to problems in semantic interpretation. In particular, we focus on two important problems in semantic interpretation, namely, word-sense disambiguation and semantic parsing.",
"title": ""
},
{
"docid": "114381e33d6c08724057e3116952dafc",
"text": "We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction / augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.",
"title": ""
},
{
"docid": "c62acdb764816d43daa8a4c3e59815e9",
"text": "Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system.",
"title": ""
},
{
"docid": "e4944af5f589107d1b42a661458fcab5",
"text": "This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014. Mobile Augmented Reality has largely evolved over the last decade, as well as the interpretation itself of what is Mobile Augmented Reality. The first instance of Mobile AR can certainly be associated with the development of wearable AR, in a sense of experiencing AR during locomotion (mobile as a motion). With the transformation and miniaturization of physical devices and displays, the concept of mobile AR evolved towards the notion of ”mobile device”, aka AR on a mobile device. In this history of mobile AR we considered both definitions and the evolution of the term over time. Major parts of the list were initially compiled by the member of the Christian Doppler Laboratory for Handheld Augmented Reality in 2009 (author list in alphabetical order) for the ISMAR society. More recent work was added in 2013 and during preparation of this report. Permission is granted to copy and modify. Please email the first author if you find any errors.",
"title": ""
},
{
"docid": "7256d6c5bebac110734275d2f985ab31",
"text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations are recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, user-based collaborative filtering approach and a similar work in the literature. According to experimental results, our algorithm outperforms these approaches in all of the test cases.",
"title": ""
}
] |
scidocsrr
|
3565944ef240d2406c5c6fc3079a2caf
|
BA-Net: Dense Bundle Adjustment Network
|
[
{
"docid": "cd73d3acb274d179b52ec6930f6f26bd",
"text": "We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPU as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than the current state of the art methods [1], while maintaining comparable convergence behavior. The code and additional results are available at http://grail.cs. washington.edu/projects/mcba.",
"title": ""
},
{
"docid": "92cc028267bc3f8d44d11035a8212948",
"text": "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.",
"title": ""
},
{
"docid": "45d6863e54b343d7a081e79c84b81e65",
"text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim",
"title": ""
}
] |
[
{
"docid": "56642ffad112346186a5c3f12133e59b",
"text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.",
"title": ""
},
{
"docid": "b915033fd3f8fdea3fc7bf9e3f95146d",
"text": "Software traceability is a required element in the development and certification of safety-critical software systems. However, trace links, which are created at significant cost and effort, are often underutilized in practice due primarily to the fact that project stakeholders often lack the skills needed to formulate complex trace queries. To mitigate this problem, we present a solution which transforms spoken or written natural language queries into structured query language (SQL). TiQi includes a general database query mechanism and a domain-specific model populated with trace query concepts, project-specific terminology, token disambiguators, and query transformation rules. We report results from four different experiments exploring user preferences for natural language queries, accuracy of the generated trace queries, efficacy of the underlying disambiguators, and stability of the trace query concepts. Experiments are conducted against two different datasets and show that users have a preference for written NL queries. Queries were transformed at accuracy rates ranging from 47 to 93 %.",
"title": ""
},
{
"docid": "3b2607bda35e535c2c4410e4c2b21a4f",
"text": "There has been recent interest in designing systems that use the tongue as an input interface. Prior work however either require surgical procedures or in-mouth sensor placements. In this paper, we introduce TongueSee, a non-intrusive tongue machine interface that can recognize a rich set of tongue gestures using electromyography (EMG) signals from the surface of the skin. We demonstrate the feasibility and robustness of TongueSee with experimental studies to classify six tongue gestures across eight participants. TongueSee achieves a classification accuracy of 94.17% and a false positive probability of 0.000358 per second using three-protrusion preamble design.",
"title": ""
},
{
"docid": "1f52dc0ee257b56b24c49b9520cf38da",
"text": "We extend approaches for skinning characters to the general setting of skinning deformable mesh animations. We provide an automatic algorithm for generating progressive skinning approximations, that is particularly efficient for pseudo-articulated motions. Our contributions include the use of nonparametric mean shift clustering of high-dimensional mesh rotation sequences to automatically identify statistically relevant bones, and robust least squares methods to determine bone transformations, bone-vertex influence sets, and vertex weight values. We use a low-rank data reduction model defined in the undeformed mesh configuration to provide progressive convergence with a fixed number of bones. We show that the resulting skinned animations enable efficient hardware rendering, rest pose editing, and deformable collision detection. Finally, we present numerous examples where skins were automatically generated using a single set of parameter values.",
"title": ""
},
{
"docid": "f14eeb6dff3f865bc65427210dd49aae",
"text": "Although the most intensively studied mammalian olfactory system is that of the mouse, in which olfactory chemical cues of one kind or another are detected in four different nasal areas [the main olfactory epithelium (MOE), the septal organ (SO), Grüneberg's ganglion, and the sensory epithelium of the vomeronasal organ (VNO)], the extraordinarily sensitive olfactory system of the dog is also an important model that is increasingly used, for example in genomic studies of species evolution. Here we describe the topography and extent of the main olfactory and vomeronasal sensory epithelia of the dog, and we report finding no structures equivalent to the Grüneberg ganglion and SO of the mouse. Since we examined adults, newborns, and fetuses we conclude that these latter structures are absent in dogs, possibly as the result of regression or involution. The absence of a vomeronasal component based on VR2 receptors suggests that the VNO may be undergoing a similar involutionary process.",
"title": ""
},
{
"docid": "c7a73ab57087752d50d79d38a84c0775",
"text": "In this paper, we address the problem of model-free online object tracking based on color representations. According to the findings of recent benchmark evaluations, such trackers often tend to drift towards regions which exhibit a similar appearance compared to the object of interest. To overcome this limitation, we propose an efficient discriminative object model which allows us to identify potentially distracting regions in advance. Furthermore, we exploit this knowledge to adapt the object representation beforehand so that distractors are suppressed and the risk of drifting is significantly reduced. We evaluate our approach on recent online tracking benchmark datasets demonstrating state-of-the-art results. In particular, our approach performs favorably both in terms of accuracy and robustness compared to recent tracking algorithms. Moreover, the proposed approach allows for an efficient implementation to enable online object tracking in real-time.",
"title": ""
},
{
"docid": "8a21ff7f3e4d73233208d5faa70eb7ce",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
{
"docid": "641a51f9a5af9fc9dba4be3d12829fd5",
"text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.",
"title": ""
},
{
"docid": "f7b5312646e5a847a47c460619184d92",
"text": "Introduction to the Values Theory When we think of our values, we think of what is important to us in our lives (e.g., security, independence, wisdom, success, kindness, pleasure). Each of us holds numerous values with varying degrees of importance. A particular value may be very important to one person, but unimportant to another. Consensus regarding the most useful way to conceptualize basic values has emerged gradually since the 1950’s. We can summarize the main features of the conception of basic values implicit in the writings of many theorists and researchers as follows:",
"title": ""
},
{
"docid": "d302bfb7c2b95def93525050016ac07c",
"text": "Face recognition remains a challenge today as recognition performance is strongly affected by variability such as illumination, expressions and poses. In this work we apply Convolutional Neural Networks (CNNs) on the challenging task of both 2D and 3D face recognition. We constructed two CNN models, namely CNN-1 (two convolutional layers) and CNN-2 (one convolutional layer) for testing on 2D and 3D dataset. A comprehensive parametric study of two CNN models on face recognition is represented in which different combinations of activation function, learning rate and filter size are investigated. We find that CNN-2 has a better accuracy performance on both 2D and 3D face recognition. Our experimental results show that an accuracy of 85.15% was accomplished using CNN-2 on depth images with FRGCv2.0 dataset (4950 images with 557 objectives). An accuracy of 95% was achieved using CNN-2 on 2D raw image with the AT&T dataset (400 images with 40 objectives). The results indicate that the proposed CNN model is capable to handle complex information from facial images in different dimensions. These results provide valuable insights into further application of CNN on 3D face recognition.",
"title": ""
},
{
"docid": "5706118011df482fdd1e3690c638e963",
"text": "This paper proposes a novel approach for segmenting primary video objects by using Complementary Convolutional Neural Networks (CCNN) and neighborhood reversible flow. The proposed approach first pre-trains CCNN on massive images with manually annotated salient objects in an end-to-end manner, and the trained CCNN has two separate branches that simultaneously handle two complementary tasks, i.e., foregroundness and backgroundness estimation. By applying CCNN on each video frame, the spatial foregroundness and backgroundness maps can be initialized, which are then propagated between various frames so as to segment primary video objects and suppress distractors. To enforce efficient temporal propagation, we divide each frame into superpixels and construct neighborhood reversible flow that reflects the most reliable temporal correspondences between superpixels in far-away frames. Within such flow, the initialized foregroundness and backgroundness can be efficiently and accurately propagated along the temporal axis so that primary video objects gradually pop-out and distractors are well suppressed. Extensive experimental results on three video datasets show that the proposed approach achieves impressive performance in comparisons with 18 state-of-the-art models.",
"title": ""
},
{
"docid": "6b25852df72c26b1467d4c51213ca122",
"text": "This paper presents a study of spectral clustering-based approaches to acoustic segment modeling (ASM). ASM aims at finding the underlying phoneme-like speech units and building the corresponding acoustic models in the unsupervised setting, where no prior linguistic knowledge and manual transcriptions are available. A typical ASM process involves three stages, namely initial segmentation, segment labeling, and iterative modeling. This work focuses on the improvement of segment labeling. Specifically, we use posterior features as the segment representations, and apply spectral clustering algorithms on the posterior representations. We propose a Gaussian component clustering (GCC) approach and a segment clustering (SC) approach. GCC applies spectral clustering on a set of Gaussian components, and SC applies spectral clustering on a large number of speech segments. Moreover, to exploit the complementary information of different posterior representations, a multiview segment clustering (MSC) approach is proposed. MSC simultaneously utilizes multiple posterior representations to cluster speech segments. To address the computational problem of spectral clustering in dealing with large numbers of speech segments, we use inner product similarity graph and make reformulations to avoid the explicit computation of the affinity matrix and Laplacian matrix. We carried out two sets of experiments for evaluation. First, we evaluated the ASM accuracy on the OGI-MTS dataset, and it was shown that our approach could yield 18.7% relative purity improvement and 15.1% relative NMI improvement compared with the baseline approach. Second, we examined the performances of our approaches in the real application of zero-resource query-by-example spoken term detection on SWS2012 dataset, and it was shown that our approaches could provide consistent improvement on four different testing scenarios with three evaluation metrics.",
"title": ""
},
{
"docid": "9d7623afe7b3ef98f81e1de0f2f2806d",
"text": "The fashion industry faces the increasing complexity of its activities such as the globalization of the market, the proliferation of information, the reduced time to market, the increasing distance between industrial partners and pressures related to costs. Digital prototype in the textile and clothing industry enables technologies in the process of product development where various operators are involved in the different stages, with various skills and competencies, and different necessity of formalizing and defining in a deterministic way the result of their activities. Taking into account the recent trends in the industry, the product development cycle and the use of new digital technologies cannot be restricted in the “typical cycle” but additional tools and skills are required to be integrated taking into account these developments [1].",
"title": ""
},
{
"docid": "19c24a77726f9095e53ae792556c2a30",
"text": "and Applied Analysis 3 The addition and scalar multiplication of fuzzy number in E are defined as follows: (1) ?̃? ⊕ Ṽ = (?̃? + Ṽ, ?̃? + Ṽ) ,",
"title": ""
},
{
"docid": "113cf34bf2a86a8f1a041cfd366c00b7",
"text": "People perceive and conceive of activity in terms of discrete events. Here the authors propose a theory according to which the perception of boundaries between events arises from ongoing perceptual processing and regulates attention and memory. Perceptual systems continuously make predictions about what will happen next. When transient errors in predictions arise, an event boundary is perceived. According to the theory, the perception of events depends on both sensory cues and knowledge structures that represent previously learned information about event parts and inferences about actors' goals and plans. Neurological and neurophysiological data suggest that representations of events may be implemented by structures in the lateral prefrontal cortex and that perceptual prediction error is calculated and evaluated by a processing pathway, including the anterior cingulate cortex and subcortical neuromodulatory systems.",
"title": ""
},
{
"docid": "cd4e2e3af17cd84d4ede35807e71e783",
"text": "A proposal for saliency computation within the visual cortex is put forth based on the premise that localized saliency computation serves to maximize information sampled from one's environment. The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of that appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and therefore basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.",
"title": ""
},
{
"docid": "30a0b6c800056408b32e9ed013565ae0",
"text": "This case report presents the successful use of palatal mini-implants for rapid maxillary expansion and mandibular distalization in a skeletal Class III malocclusion. The patient was a 13-year-old girl with the chief complaint of facial asymmetry and a protruded chin. Camouflage orthodontic treatment was chosen, acknowledging the possibility of need for orthognathic surgery after completion of her growth. A bone-borne rapid expander (BBRME) was used to correct the transverse discrepancy and was then used as indirect anchorage for distalization of the lower dentition with Class III elastics. As a result, a Class I occlusion with favorable inclination of the upper teeth was achieved without any adverse effects. The total treatment period was 25 months. Therefore, BBRME can be considered an alternative treatment in skeletal Class III malocclusion.",
"title": ""
},
{
"docid": "ea29b3421c36178680ae63c16b9cecad",
"text": "Traffic engineering under OSPF routes along the shortest paths, which may cause network congestion. Software Defined Networking (SDN) is an emerging network architecture which exerts a separation between the control plane and the data plane. The SDN controller can centrally control the network state through modifying the flow tables maintained by routers. Network operators can flexibly split arbitrary flows to outgoing links through the deployment of the SDN. However, SDN has its own challenges of full deployment, which makes the full deployment of SDN difficult in the short term. In this paper, we explore the traffic engineering in a SDN/OSPF hybrid network. In our scenario, the OSPF weights and flow splitting ratio of the SDN nodes can both be changed. The controller can arbitrarily split the flows coming into the SDN nodes. The regular nodes still run OSPF. Our contribution is that we propose a novel algorithm called SOTE that can obtain a lower maximum link utilization. We reap a greater benefit compared with the results of the OSPF network and the SDN/OSPF hybrid network with fixed weight setting. We also find that when only 30% of the SDN nodes are deployed, we can obtain a near optimal performance.",
"title": ""
},
{
"docid": "44cad643330467a07beb81ce22d86371",
"text": "Distributed ledger technologies are rising in popularity, mainly for the host of financial applications they potentially enable, through smart contracts. Several implementations of distributed ledgers have been proposed, and different languages for the development of smart contracts have been suggested. A great deal of attention is given to the practice of development, i.e. programming, of smart contracts. In this position paper, we argue that more attention should be given to the “traditional developers” of contracts, namely the lawyers, and we propose a list of requirements for a human and machine-readable contract authoring language, friendly to lawyers, serving as a common (and a specification) language, for programmers, and the parties to a contract.",
"title": ""
}
] |
scidocsrr
|
1abddf213d1ae06fcae75753fa1ed0b6
|
Continuous Authentication on Mobile Devices by Analysis of Typing Motion Behavior
|
[
{
"docid": "62eaac4d22c2bc278f411761fc3d493f",
"text": "Smartphone users have their own unique behavioral patterns when tapping on the touch screens. These personal patterns are reflected on the different rhythm, strength, and angle preferences of the applied force. Since smart phones are equipped with various sensors like accelerometer, gyroscope, and touch screen sensors, capturing a user's tapping behaviors can be done seamlessly. Exploiting the combination of four features (acceleration, pressure, size, and time) extracted from smart phone sensors, we propose a non-intrusive user verification mechanism to substantiate whether an authenticating user is the true owner of the smart phone or an impostor who happens to know the pass code. Based on the tapping data collected from over 80 users, we conduct a series of experiments to validate the efficacy of our proposed system. Our experimental results show that our verification system achieves high accuracy with averaged equal error rates of down to 3.65%. As our verification system can be seamlessly integrated with the existing user authentication mechanisms on smart phones, its deployment and usage are transparent to users and do not require any extra hardware support.",
"title": ""
}
] |
[
{
"docid": "24fea8f85c2fac8bd8278a153ab64a90",
"text": "In this paper, we describe an approach for learning planning domain models directly from natural language (NL) descriptions of activity sequences. The modelling problem has been identified as a bottleneck for the widespread exploitation of various technologies in Artificial Intelligence, including automated planners. There have been great advances in modelling assisting and model generation tools, including a wide range of domain model acquisition tools. However, for modelling tools, there is the underlying assumption that the user can formulate the problem using some formal language. And even in the case of the domain model acquisition tools, there is still a requirement to specify input plans in an easily machine readable format. Providing this type of input is impractical for many potential users. This motivates us to generate planning domain models directly from NL descriptions, as this would provide an important step in extending the widespread adoption of planning techniques. We start from NL descriptions of actions and use NL analysis to construct structured representations, from which we construct formal representations of the action sequences. The generated action sequences provide the necessary structured input for inducing a PDDL domain, using domain model acquisition technology. In order to capture a concise planning model, we use an estimate of functional similarity, so sentences that describe similar behaviours are represented by the same planning operator. We validate our approach with a user study, where participants are tasked with describing the activities occurring in several videos. Then our system is used to learn planning domain models using the participants’ NL input. We demonstrate that our approach is effective at learning models on these tasks. Introduction Modelling problems appropriately for use by a computer program has been identified as a key bottleneck in the exploitation of various AI technologies. In Automated Planning, this has inspired a growing body of work that aims to support the modelling process including domain acquisition tools, which learn a formal domain model of a system from some form of input data. There is interest in applying domain model acquisition across a range of research and application areas. For example within the business process community (Hoffmann, Weber, and Kraft 2012) and Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. space applications (Frank et al. 2011). An extended version of the LOCM domain model acquisition system (Cresswell, McCluskey, and West 2009) has also been used to help in the development of a puzzle game (Ersen and Sariel 2015) based on spatio-temporal reasoning. Web Service Composition is another area in which domain model acquisition techniques have been used (Walsh and Littman 2008). These tools vary in the specifics of the input language, such as example action sequences (Cresswell, McCluskey, and West 2009; Cresswell and Gregory 2011), or action sequences and a partial domain model (McCluskey et al. 
2009; Richardson 2008); the query system by which they acquire the input data, which is typically static training sets, although there are examples working with an interactive querying system (Walsh and Littman 2008; Mehta, Tadepalli, and Fern 2011); and the target model language, including STRIPS (Cresswell, McCluskey, and West 2009; Cresswell and Gregory 2011), probabilistic (Mourão, Petrick, and Steedman 2010), and numeric (Gregory and Lindsay 2016; Hayton et al. 2016). However, in each case the user is left the responsibility of defining a formal representation for the solution. Defining these logical formalisms and applying them consistently requires time and experience in both the target domain and in the representation language, which many potential users will not have. It is therefore important to consider alternative input languages, such as Natural Language (Goldwasser and Roth 2011). Natural Language (NL) input is the most natural way for humans to interact and it is no surprise that there is much interest in using NL as input for computer systems. In day-to-day life, Siri and its competitors are controlled by simple spoken word input, but can activate complex procedures on our phones. In the RoboCup@Home competitions robots are controlled by task descriptions and are automatically translated into a series of simple actions that can be performed on the robot. And NL lessons have been used to learn partial representations of the world dynamics for game-like environments (Goldwasser and Roth 2011). A key aspect of these systems is an underlying language, which the NL input is mapped onto. For example, in the case of RoboCup@Home, an input of ‘go to the living room’ might be mapped onto quite a different representation, using the action name ‘move’ and requiring a set of parameters that break the movement into smaller",
"title": ""
},
{
"docid": "b0a1401136b75cfae05e7a8b31a0331c",
"text": "Voice interfaces are becoming accepted widely as input methods for a diverse set of devices. This development is driven by rapid improvements in automatic speech recognition (ASR), which now performs on par with human listening in many tasks. These improvements base on an ongoing evolution of deep neural networks (DNNs) as the computational core of ASR. However, recent research results show that DNNs are vulnerable to adversarial perturbations, which allow attackers to force the transcription into a malicious output. In this paper, we introduce a new type of adversarial examples based on psychoacoustic hiding. Our attack exploits the characteristics of DNN-based ASR systems, where we extend the original analysis procedure by an additional backpropagation step. We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception. To further minimize the perceptibility of the perturbations, we use forced alignment to find the best fitting temporal alignment between the original audio sample and the malicious target transcription. These extensions allow us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal. In an experimental evaluation, we attack the state-of-the-art speech recognition system Kaldi and determine the best performing parameter and analysis setup for different types of input. Our results show that we are successful in up to 98% of cases with a computational effort of fewer than two minutes for a ten-second audio file. Based on user studies, we found that none of our target transcriptions were audible to human listeners, who still understand the original speech content with unchanged accuracy.",
"title": ""
},
{
"docid": "ddeb76fa4315ee274bf1aa7ac014b6a2",
"text": "Linked Data offers new opportunities for Semantic Web-based application development by connecting structured information from various domains. These technologies allow machines and software agents to automatically interpret and consume Linked Data and provide users with intelligent query answering services. In order to enable advanced and innovative semantic applications of Linked Data such as recommendation, social network analysis, and information clustering, a fundamental requirement is systematic metrics that allow comparison between resources. In this research, we develop a hybrid similarity metric based on the characteristics of Linked Data. In particular, we develop and demonstrate metrics for providing recommendations of closely related resources. The results of our preliminary experiments and future directions are also presented.",
"title": ""
},
{
"docid": "7dc5270a9cf4eaf4af89e4e4bb4d1e90",
"text": "OBJECTIVE\nThe purpose of this study was to determine whether the use of race-specific definitions of short femur and humerus lengths improves Down syndrome detection.\n\n\nMETHODS\nThis was a retrospective cohort study over 16 years. For each self-reported maternal race (white, African American, Hispanic, and Asian), we evaluated the efficiency of Down syndrome detection using published race-specific formulas compared with a standard formula for short femur and humerus lengths (observed versus expected lengths < or =0.91 and < or =0.89, respectively). The sensitivity, specificity, and 95% confidence intervals for each parameter were compared. Screening performance was compared by areas under the receiver operating characteristic curves.\n\n\nRESULTS\nOf 58,710 women, 209 (0.3%) had a diagnosis of a fetus with Down syndrome. Although the race-based formula increased sensitivity in each population, the increase was statistically significant only in the white population, whereas a decrease in specificity was statistically significant in all 4 populations, as denoted by nonoverlapping confidence intervals. The area under the receiver operating characteristic curve for the model using the race-specific definition of short femur length was 0.67 versus 0.65 compared with the standard definition, and for humerus length it was 0.70 versus 0.71.\n\n\nCONCLUSIONS\nThe use of race-based formulas for the determination of short femur and humerus lengths did not significantly improve the detection rates for Down syndrome.",
"title": ""
},
{
"docid": "3d56f88bf8053258a12e609129237b19",
"text": "Thepresentstudyfocusesontherelationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.",
"title": ""
},
{
"docid": "ec40606c46cc1bd3e1d4c64793a8ca83",
"text": "Thin-layer chromatography (TLC) and liquid chromatography (LC) methods were developed for the qualitative and quantitative determination of agrimoniin, pedunculagin, ellagic acid, gallic acid, and catechin in selected herbal medicinal products from Rosaceae: Anserinae herba, Tormentillae rhizoma, Alchemillae herba, Agrimoniae herba, and Fragariae folium. Unmodified silica gel (TLC Si60, HPTLC LiChrospher Si60) and silica gel chemically modified with octadecyl or aminopropyl groups (HPTLC RP18W and HPTLC NH2) were used for TLC. The best resolution and selectivity were achieved with the following mobile phases: diisopropyl ether-acetone-formic acid-water (40 + 30 + 20 + 10, v/v/v/v), tetrahydrofuran-acetonitrile-water (30 + 10 + 60, v/v/v), and acetone-formic acid (60 + 40, v/v). Concentrations of the studied herbal drugs were determined by using a Chromolith Performance RP-18e column with acetonitrile-water-formic acid as the mobile phase. Determinations of linearity, range, detection and quantitation limits, accuracy, precision, and robustness showed that the HPLC method was sufficiently precise for estimation of the tannins and related polyphenols mentioned above. Investigations of suitable solvent selection, sample extraction procedure, and short-time stability of analytes at storage temperatures of 4 and 20 degrees C were also performed. The percentage of agrimoniin in pharmaceutical products was between 0.57 and 3.23%.",
"title": ""
},
{
"docid": "7fc60611cf6ce5eded2cfb65f05d7cd7",
"text": "In this letter, a new design for single-feed dual-band circularly polarized microstrip antennas is presented. A stacked- patch configuration is used for the antenna, and circular polarization is achieved by designing asymmetrical U-slots on the patches. The dimensions of the U-slots are optimized to achieve circular polarization in both bands. A prototype has been designed to operate at two frequencies with a ratio of 1.66. Both experimental and theoretical results are presented and discussed. The circularly polarized bandwidth of the antenna is 1.0% at 3.5 GHz (WiMax) and 3.1% at 5.8 GHz (HiperLAN).",
"title": ""
},
{
"docid": "3f6a61bf0c3b9c81d24951ed8fa39b04",
"text": "In this paper, we consider argument mining as the task of buil ding a formal representation for an argumentative piece of text. Our goal is to provide a criti cal survey of the literature on both the resulting representations (i.e., argument diagrammin g techniques) and on the various aspects of the automatic analysis process. For representation, we a lso provide a synthesized proposal of a scheme that combines advantages from several of the earlier approaches; in addition, we discuss the relationship between representing argument structure and the rhetorical structure of texts in the sense of Mann and Thompsons (1988) RST. Then, for the argu ment mining problem, we also cover the literature on closely-related tasks that have bee n tackled in Computational Linguistics, because we think that these can contribute to more powerful a rg ment mining systems than the first prototypes that were built in recent years. The paper co ncludes with our suggestions for the major challenges that should be addressed in the field of argu ment mining.",
"title": ""
},
{
"docid": "b69e3e8eda027300a66813a9a7afba5c",
"text": "Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"title": ""
},
{
"docid": "a094869c9f79d0fccbc6892a345fec8b",
"text": "Recent years have seen an exploration of data volumes from a myriad of IoT devices, such as various sensors and ubiquitous cameras. The deluge of IoT data creates enormous opportunities for us to explore the physical world, especially with the help of deep learning techniques. Traditionally, the Cloud is the option for deploying deep learning based applications. However, the challenges of Cloud-centric IoT systems are increasing due to significant data movement overhead, escalating energy needs, and privacy issues. Rather than constantly moving a tremendous amount of raw data to the Cloud, it would be beneficial to leverage the emerging powerful IoT devices to perform the inference task. Nevertheless, the statically trained model could not efficiently handle the dynamic data in the real in-situ environments, which leads to low accuracy. Moreover, the big raw IoT data challenges the traditional supervised training method in the Cloud. To tackle the above challenges, we propose In-situ AI, the first Autonomous and Incremental computing framework and architecture for deep learning based IoT applications. We equip deep learning based IoT system with autonomous IoT data diagnosis (minimize data movement), and incremental and unsupervised training method (tackle the big raw IoT data generated in ever-changing in-situ environments). To provide efficient architectural support for this new computing paradigm, we first characterize the two In-situ AI tasks (i.e. inference and diagnosis tasks) on two popular IoT devices (i.e. mobile GPU and FPGA) and explore the design space and tradeoffs. Based on the characterization results, we propose two working modes for the In-situ AI tasks, including Single-running and Co-running modes. Moreover, we craft analytical models for these two modes to guide the best configuration selection. We also develop a novel two-level weight shared In-situ AI architecture to efficiently deploy In-situ tasks to IoT node. Compared with traditional IoT systems, our In-situ AI can reduce data movement by 28-71%, which further yields 1.4X-3.3X speedup on model update and contributes to 30-70% energy saving.",
"title": ""
},
{
"docid": "087592a720a9d2203c0884e4c6798c79",
"text": "Hard disk encryption is known to be vulnerable to a number of attacks that aim to directly extract cryptographic key material from system memory. Several approaches to preventing this class of attacks have been proposed, including Tresor [18] and LoopAmnesia [25]. The common goal of these systems is to confine the encryption key and encryption process itself to the CPU, such that sensitive key material is never released into system memory where it could be accessed by a DMA attack.\n In this work, we demonstrate that these systems are nevertheless vulnerable to such DMA attacks. Our attack, which we call Tresor-Hunt, relies on the insight that DMA-capable adversaries are not restricted to simply reading physical memory, but can write arbitrary values to memory as well. Tresor-Hunt leverages this insight to inject a ring 0 attack payload that extracts disk encryption keys from the CPU into the target system's memory, from which it can be retrieved using a normal DMA transfer.\n Our implementation of this attack demonstrates that it can be constructed in a reliable and OS-independent manner that is applicable to any CPU-bound encryption technique, IA32-based system, and DMA-capable peripheral bus. Furthermore, it does not crash the target system or otherwise significantly compromise its integrity. Our evaluation supports the OS-independent nature of the attack, as well as its feasibility in real-world scenarios. Finally, we discuss several countermeasures that might be adopted to mitigate this attack and render CPU-bound encryption systems viable.",
"title": ""
},
{
"docid": "1d0af4c5ee3a91a140a198c4ffa828d6",
"text": "A frequency tunable antenna for 4G global roaming devices operating in the 2.3-2.7GHz band is presented. Both the design and manufacturing methods are described and measured data are provided. Antenna is a half-patch with a reconfigurable aperture realized by a collection of shorting pins that are controlled by DC signals. The design follows the teachings of the patented self-structuring antenna technology and can be operated either in open or closed loop fashion. The frequency tunable feature of the antenna also makes it immune to detuning when used in a closed loop control system. Though the design is compatible with a multitude of manufacturing and embedding methods, the particular prototype was built by wire-bonding bare-die SPST switches onto the antenna board.",
"title": ""
},
{
"docid": "d761b2718cfcabe37b72768962492844",
"text": "In the most recent years, wireless communication networks have been facing a rapidly increasing demand for mobile traffic along with the evolvement of applications that require data rates of several 10s of Gbit/s. In order to enable the transmission of such high data rates, two approaches are possible in principle. The first one is aiming at systems operating with moderate bandwidths at 60 GHz, for example, where 7 GHz spectrum is dedicated to mobile services worldwide. However, in order to reach the targeted date rates, systems with high spectral efficiencies beyond 10 bit/s/Hz have to be developed, which will be very challenging. A second approach adopts moderate spectral efficiencies and requires ultra high bandwidths beyond 20 GHz. Such an amount of unregulated spectrum can be identified only in the THz frequency range, i.e. beyond 300 GHz. Systems operated at those frequencies are referred to as THz communication systems. The technology enabling small integrated transceivers with highly directive, steerable antennas becomes the key challenges at THz frequencies in face of the very high path losses. This paper gives an overview over THz communications, summarizing current research projects, spectrum regulations and ongoing standardization activities.",
"title": ""
},
{
"docid": "ce37a5fa510c34fd7246cde1f11c6e5d",
"text": "A new manipulation approach, referred to as interleaved continuum-rigid manipulation, which combines inherently safe, flexible actuated segments with more precise embedded rigid-link joints has recently been introduced [1], [2]. The redundantly actuated manipulator possesses the safety characteristics inherent in flexible segment devices while gaining some of the performance attributes of rigid-link joint systems. In this paper, we describe a general controller developed for an interleaved manipulator. The controller is implemented on a clinically-relevant prototype, the results of which demonstrate the advantages of an interleaved manipulator. We also consider kinematic drivers of the interleaved manipulator workspace, showing that careful kinematic considerations can substantially improve manipulator workspace and task accuracy.",
"title": ""
},
{
"docid": "d63946a096b9e8a99be6d5ddfe4097da",
"text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.",
"title": ""
},
{
"docid": "be49c21abb971f31690fce9dc553e54b",
"text": "In last decade, various agile methods have been introduced and used by software industry. It has been observed that many practitioners are using hybrid of agile methods and traditional methods. The knowledge of agile software development process about the theoretical grounds, applicability in large development settings and connections to establish software engineering disciplines remain mostly in dark. It has been reported that it is difficult for average manager to implement agile method in the organization. Further, every agile method has its own development cycle that brings technological, managerial and environmental changes in organization. A proper roadmap of agile software development in the form of agile software development life cycle can be developed to address the aforesaid issues of agile software development process. Thus, there is strong need of agile software development life cycle that clearly defines the phases included in any agile method and also describes the artifacts of each phase. This generalization of agile software development life cycle provides the guideline for average developers about usability, suitability, applicability of agile methods. Keywords-Agile software Development; extreme Programming; Adaptive software developmen; Scrum; Agile Method;story.",
"title": ""
},
{
"docid": "5227121a2feb59fc05775e2623239da9",
"text": "BACKGROUND\nCriminal offenders with a diagnosis of psychopathy or borderline personality disorder (BPD) share an impulsive nature but tend to differ in their style of emotional response. This study aims to use multiple psychophysiologic measures to compare emotional responses to unpleasant and pleasant stimuli.\n\n\nMETHODS\nTwenty-five psychopaths as defined by the Hare Psychopathy Checklist and 18 subjects with BPD from 2 high-security forensic treatment facilities were included in the study along with 24 control subjects. Electrodermal response was used as an indicator of emotional arousal, modulation of the startle reflex as a measure of valence, and electromyographic activity of the corrugator muscle as an index of emotional expression.\n\n\nRESULTS\nCompared with controls, psychopaths were characterized by decreased electrodermal responsiveness, less facial expression, and the absence of affective startle modulation. A higher percentage of psychopaths showed no startle reflex. Subjects with BPD showed a response pattern very similar to that of controls, ie, they showed comparable autonomic arousal, and their startle responses were strongest to unpleasant slides and weakest to pleasant slides. However, corrugator electromyographic activity in subjects with BPD demonstrated little facial modulation when they viewed either pleasant or unpleasant slides.\n\n\nCONCLUSIONS\nThe results support the theory that psychopaths are characterized by a pronounced lack of fear in response to aversive events. Furthermore, the results suggest a general deficit in processing affective information, regardless of whether stimuli are negative or positive. Emotional hyporesponsiveness was specific to psychopaths, since results for offenders with BPD indicate a widely adequate processing of emotional stimuli.",
"title": ""
},
{
"docid": "4cf669d93a62c480f4f6795f47744bc8",
"text": "We present an estimate of an upper bound of 1.75 bits for the entropy of characters in printed English, obtained by constructing a word trigram model and then computing the cross-entropy between this model and a balanced sample of English text. We suggest the well-known and widely available Brown Corpus of printed English as a standard against which to measure progress in language modeling and offer our bound as the first of what we hope will be a series of steadily decreasing bounds.",
"title": ""
},
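The cross-entropy bound the preceding passage relies on can be made explicit. A minimal sketch, assuming a word-trigram model q estimated from training text and a balanced test sample w_1, ..., w_N (the specific smoothing and tokenization are not stated above):

\[
H(P) \;\le\; H(P, q) \;=\; -\frac{1}{N}\sum_{i=1}^{N} \log_2 q\!\left(w_i \mid w_{i-2},\, w_{i-1}\right)
\]

Dividing this per-word figure by the average number of characters per word (spaces included) gives the per-character upper bound of about 1.75 bits quoted in the passage.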
{
"docid": "b83cd79ce5086124ab7920ab589e61bf",
"text": "Many of today’s most successful video segmentation methods use long-term feature trajectories as their first processing step. Such methods typically use spectral clustering to segment these trajectories, implicitly assuming that motion is translational in image space. In this paper, we explore the idea of explicitly fitting more general motion models in order to classify trajectories as foreground or background. We find that homographies are sufficient to model a wide variety of background motions found in real-world videos. Our simple approach achieves competitive performance on the DAVIS benchmark, while using techniques complementary to state-of-the-art approaches.",
"title": ""
},
{
"docid": "5415bb23210d1e0c370cf2ab0898affc",
"text": "PURPOSE\nTo compare a developmental indirect resin composite with an established, microfilled directly placed resin composite used to restore severely worn teeth. The cause of the tooth wear was a combination of erosion and attrition.\n\n\nMATERIALS AND METHODS\nOver a 3-year period, a total of 32 paired direct or indirect microfilled resin composite restorations were placed on premolars and molars in 16 patients (mean age: 43 years, range: 25 to 62) with severe tooth wear. A further 26 pairs of resin composite were placed in 13 controls (mean age: 39 years, range 28 to 65) without evidence of tooth wear. The material was randomly selected for placement in the left or right sides of the mouth.\n\n\nRESULTS\nSixteen restorations were retained in the tooth wear group (7 indirect and 9 direct), 7 (22%) fractured (4 indirect and 3 direct), and 9 (28%) were completely lost (5 indirect and 4 direct). There was no statistically significant difference in failure rates between the materials in this group. The control group had 21 restorations (80%) that were retained (10 indirect and 12 direct), a significantly lower rate of failure than in the tooth wear patients (P = .027).\n\n\nCONCLUSION\nThe results of this short-term study suggest that the use of direct and indirect resin composites for restoring worn posterior teeth is contraindicated.",
"title": ""
}
] |
scidocsrr
|
5d5721eda2536fe2bc410c354e0f94fb
|
Anomaly Machine Component Detection by Deep Generative Model with Unregularized Score
|
[
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "7b6231f2e0fe08e2f72bf45176b5481f",
"text": "PCA is a classical statistical technique whose simplicity and maturity has seen it find widespread use for anomaly detection. However, it is limited in this regard by being sensitive to gross perturbations of the input, and by seeking a linear subspace that captures normal behaviour. The first issue has been dealt with by robust PCA, a variant of PCA that explicitly allows for some data points to be arbitrarily corrupted; however, this does not resolve the second issue, and indeed introduces the new issue that one can no longer inductively find anomalies on a test set. This paper addresses both issues in a single model, the robust autoencoder. This method learns a nonlinear subspace that captures the majority of data points, while allowing for some data to have arbitrary corruption. The model is simple to train and leverages recent advances in the optimisation of deep neural networks. Experiments on a range of real-world datasets highlight the model’s effectiveness.",
"title": ""
},
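The preceding passage scores anomalies through an autoencoder's failure to reconstruct them. Below is a minimal PyTorch sketch of that general idea only; the layer sizes, training loop, and thresholding are illustrative assumptions and do not implement the paper's robustness (outlier-splitting) term.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, n_hidden: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, x, epochs: int = 200, lr: float = 1e-3):
    # Fit the autoencoder to (mostly normal) data by minimizing reconstruction error.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, x):
    # Per-sample reconstruction error; large values suggest anomalies.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Usage sketch: x is a (n_samples, n_features) float tensor; flag samples whose
# score exceeds, e.g., a high percentile of the training scores.
```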
{
"docid": "3f255fa3dcb8b027f1736b30e98254f9",
"text": "We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally involving a small move, so this conditional distribution has fewer dominant modes, being unimodal in the limit of small moves. Thus, it is easier to learn because it is easier to approximate its partition function, more like learning to perform supervised function approximation, with gradients that can be obtained by backprop. We provide theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders and obtain along the way an interesting justification for dependency networks and generalized pseudolikelihood, along with a definition of an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. We validate these theoretical results with experiments on two image datasets using an architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows training to proceed with simple backprop, without the need for layerwise pretraining.",
"title": ""
}
] |
[
{
"docid": "d6ca38ccad91c0c2c51ba3dd5be454b2",
"text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.",
"title": ""
},
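The preceding passage centers on conditional functional dependencies (CFDs) and the records that violate them. The sketch below only checks a single, already-known CFD on a pandas DataFrame and returns non-conformant rows; the column names and example rule are assumptions, and the paper's discovery and pruning algorithm itself is not shown.

```python
import pandas as pd

def cfd_violations(df: pd.DataFrame, condition: dict, lhs: list, rhs: list) -> pd.DataFrame:
    """Rows violating the CFD: within the context `condition`, the attributes
    in `lhs` should functionally determine those in `rhs`."""
    ctx = df
    for col, val in condition.items():
        ctx = ctx[ctx[col] == val]          # restrict to the rule's context
    bad_keys = []
    for key, group in ctx.groupby(lhs):
        # Violation: one LHS value combination maps to more than one RHS combination.
        if group[rhs].drop_duplicates().shape[0] > 1:
            bad_keys.append(key)
    return ctx[ctx.set_index(lhs).index.isin(bad_keys)]

# Usage sketch (hypothetical columns):
# dirty = cfd_violations(df, condition={"dept": "CS", "level": "graduate"},
#                        lhs=["course_no", "term"], rhs=["room", "instructor"])
```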
{
"docid": "58ee9935e8111cf1fe0c09c7f61d7d07",
"text": "Distributional reinforcement learning (distributional RL) has seen empirical success in complex Markov Decision Processes (MDPs) in the setting of nonlinear function approximation. However there are many different ways in which one can leverage the distributional approach to reinforcement learning. In this paper, we propose GAN Q-learning, a novel distributional RL method based on generative adversarial networks (GANs) and analyze its performance in simple tabular environments, as well as OpenAI Gym. We empirically show that our algorithm leverages the flexibility and blackbox approach of deep learning models while providing a viable alternative to traditional methods.",
"title": ""
},
{
"docid": "8bcb5b946b9f5e07807ec9a44884cf4e",
"text": "Using data from two waves of a panel study of families who currently or recently received cash welfare benefits, we test hypotheses about the relationship between food hardships and behavior problems among two different age groups (458 children ages 3–5-and 747 children ages 6–12). Results show that food hardships are positively associated with externalizing behavior problems for older children, even after controlling for potential mediators such as parental stress, warmth, and depression. Food hardships are positively associated with internalizing behavior problems for older children, and with both externalizing and internalizing behavior problems for younger children, but these effects are mediated by parental characteristics. The implications of these findings for child and family interventions and food assistance programs are discussed. Food Hardships and Child Behavior Problems among Low-Income Children INTRODUCTION In the wake of the 1996 federal welfare reforms, several large-scale, longitudinal studies of welfare recipients and low-income families were launched with the intent of assessing direct benchmarks, such as work and welfare activity, over time, as well as indirect and unintended outcomes related to material hardship and mental health. One area of special concern to many researchers and policymakers alike is child well-being in the context of welfare reforms. As family welfare use and parental work activities change under new welfare policies, family income and material resources may also fluctuate. To the extent that family resources are compromised by changes in welfare assistance and earnings, children may experience direct hardships, such as instability in food consumption, which in turn may affect other areas of functioning. It is also possible that changes in parental work and family welfare receipt influence children indirectly through their caregivers. As parents themselves experience hardships or new stresses, their mental health and interactions with their children may change, which in turn could affect their children’s functioning. This research assesses whether one particular form of hardship, food hardship, is associated with adverse behaviors among low-income children. Specifically, analyses assess whether food hardships have relationships with externalizing (e.g., aggressive or hyperactive) and internalizing (e.g., anxietyand depression-related) child behavior problems, and whether associations between food hardships and behavior problems are mediated by parental stress, warmth, and depression. The study involves a panel survey of individuals in one state who were receiving Temporary Assistance for Needy Families (TANF) in 1998 and were caring for minor-aged children. Externalizing and internalizing behavior problems associated with a randomly selected child from each household are assessed in relation to key predictors, taking advantage of the prospective study design. 2 BACKGROUND Food hardships have been conceptualized by researchers in various ways. For example, food insecurity is defined by the U.S. Department of Agriculture (USDA) as the “limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways” (Bickel, Nord, Price, Hamilton, and Cook, 2000, p. 6). 
An 18-item scale was developed by the USDA to assess household food insecurity with and without hunger, where hunger represents a potential result of more severe forms of food insecurity, but not a necessary condition for food insecurity to exist (Price, Hamilton, and Cook, 1997). Other researchers have used selected items from the USDA Food Security Module to assess food hardships (Nelson, 2004; Bickel et al., 2000) The USDA also developed the following single-item question to identify food insufficiency: “Which of the following describes the amount of food your household has to eat....enough to eat, sometimes not enough to eat, or often not enough to eat?” This measure addresses the amount of food available to a household, not assessments about the quality of the food consumed or worries about food (Alaimo, Olson and Frongillo, 1999; Dunifon and Kowaleski-Jones, 2003). The Community Childhood Hunger Identification Project (CCHIP) assesses food hardships using an 8-item measure to determine whether the household as a whole, adults as individuals, or children are affected by food shortages, perceived food insufficiency, or altered food intake due to resource constraints (Wehler, Scott, and Anderson, 1992). Depending on the number of affirmative answers, respondents are categorized as either “hungry,” “at-risk for hunger,” or “not hungry” (Wehler et al., 1992; Kleinman et al., 1998). Other measures, such as the Radimer/Cornell measures of hunger and food insecurity, have also been created to measure food hardships (Kendall, Olson, and Frongillo, 1996). In recent years, food hardships in the United States have been on the rise. After declining from 1995 to 1999, the prevalence of household food insecurity in households with children rose from 14.8 percent in 1999 to 16.5 percent in 2002, and the prevalence of household food insecurity with hunger in households with children rose from 0.6 percent in 1999 to 0.7 percent in 2002 (Nord, Andrews, and 3 Carlson, 2003). A similar trend was also observed using a subset of questions from the USDA Food Security Module (Nelson, 2004). Although children are more likely than adults to be buffered from household food insecurity (Hamilton et al., 1997) and inadequate nutrition (McIntyre et al., 2003), a concerning number of children are reported to skip meals or have reduced food intake due to insufficient household resources. Nationally, children in 219,000 U.S. households were hungry at times during the 12 months preceding May 1999 (Nord and Bickel, 2002). Food Hardships and Child Behavior Problems Very little research has been conducted on the effects of food hardship on children’s behaviors, although the existing research suggests that it is associated with adverse behavioral and mental health outcomes for children. Using data from the National Health and Nutrition Examination Survey (NHANES), Alaimo and colleagues (2001a) found that family food insufficiency is positively associated with visits to a psychologist among 6to 11year-olds. Using the USDA Food Security Module, Reid (2002) found that greater severity and longer periods of children’s food insecurity were associated with greater levels of child behavior problems. Dunifon and Kowaleski-Jones (2003) found, using the same measure, that food insecurity is associated with fewer positive behaviors among school-age children. 
Children from households with incomes at or below 185 percent of the poverty level who are identified as hungry are also more likely to have a past or current history of mental health counseling and to have more psychosocial dysfunctions than children who are not identified as hungry (Kleinman et al., 1998; Murphy et al., 1998). Additionally, severe child hunger in both pre-school-age and school-age children is associated with internalizing behavior problems (Weinreb et al., 2002), although Reid (2002) found a stronger association between food insecurity and externalizing behaviors than between food insecurity and internalizing behaviors among children 12 and younger. Other research on hunger has identified several adverse behavioral consequences for children (See Wachs, 1995 for a review; Martorell, 1996; Pollitt, 1994), including poor play behaviors, poor preschool achievement, and poor scores on 4 developmental indices (e.g., Bayley Scores). These studies have largely taken place in developing countries, where the prevalence of hunger and malnutrition is much greater than in the U.S. population (Reid, 2002), so it is not known whether similar associations would emerge for children in the United States. Furthermore, while existing studies point to a relationship between food hardships and adverse child behavioral outcomes, limitations in design stemming from cross-sectional data, reliance on singleitem measures of food difficulties, or failure to adequately control for factors that may confound the observed relationships make it difficult to assess the robustness of the findings. For current and recent recipients of welfare and their families, increased food hardships are a potential problem, given the fluctuations in benefits and resources that families are likely to experience as a result of legislative reforms. To the extent that food hardships are tied to economic factors, we may expect levels of food hardships to increase for families who experience periods of insufficient material resources, and to decrease for families whose economic situations improve. If levels of food hardship are associated with the availability of parents and other caregivers, we may find that the provision of food to children changes as parents work more hours, or as children spend more time in alternative caregiving arrangements. Poverty and Child Behavior Problems When exploring the relationship between food hardships and child well-being, it is crucial to ensure that factors associated with economic hardship and poverty are adequately controlled, particularly since poverty has been linked to some of the same outcomes as food hardships. Extensive research has shown a higher prevalence of behavior problems among children from families of lower socioeconomic status (McLoyd, 1998; Duncan, Brooks-Gunn, and Klebanov, 1994), and from families receiving welfare (Hofferth, Smith, McLoyd, and Finkelstein, 2000). This relationship has been shown to be stronger among children in single-parent households than among those in two-parent households (Hanson, McLanahan, and Thompson, 1996), and among younger children (Bradley and Corwyn, 2002; McLoyd, 5 1998), with less consistent findings for adolescents (Conger, Conger, and Elder, 1997; Elder, N",
"title": ""
},
{
"docid": "66f46290a9194d4e982b8d1b59a73090",
"text": "Sensor to body calibration is a key requirement for capturing accurate body movements in applications based on wearable systems. In this paper, we consider the specific problem of estimating the positions of multiple inertial measurement units (IMUs) relative to the adjacent body joints. To derive an efficient, robust and precise method based on a practical procedure is a crucial as well as challenging task when developing a wearable system with multiple embedded IMUs. In this work, first, we perform a theoretical analysis of an existing position calibration method, showing its limited applicability for the hip and knee joint. Based on this, we propose a method for simultaneously estimating the positions of three IMUs (mounted on pelvis, upper leg, lower leg) relative to these joints. The latter are here considered as an ensemble. Finally, we perform an experimental evaluation based on simulated and real data, showing the improvements of our calibration method as well as lines of future work.",
"title": ""
},
{
"docid": "3e845c9a82ef88c7a1f4447d57e35a3e",
"text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.",
"title": ""
},
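The passage above contrasts learned heuristics with predefined ones such as common neighbors and the Katz index. For reference, a short sketch of those two classical scores (not the paper's GNN method); the graph and the damping factor beta are assumptions.

```python
import networkx as nx
import numpy as np

def common_neighbors_score(G: nx.Graph, u, v) -> int:
    # Number of shared neighbors of u and v.
    return len(list(nx.common_neighbors(G, u, v)))

def katz_scores(G: nx.Graph, beta: float = 0.05) -> np.ndarray:
    """Katz index for all node pairs: paths of every length, damped by beta per hop.
    beta must be below 1 / (largest eigenvalue of A) for the series to converge."""
    A = nx.to_numpy_array(G)
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

# Usage sketch:
# G = nx.karate_club_graph()
# print(common_neighbors_score(G, 0, 33), katz_scores(G)[0, 33])
```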
{
"docid": "49aa556fa64cf5cc9e524cbd4b27d426",
"text": "In this paper, we focus on tackling the problem of automatic accurate localization of detected objects in high-resolution remote sensing images. The two major problems for object localization in remote sensing images caused by the complex context information such images contain are achieving generalizability of the features used to describe objects and achieving accurate object locations. To address these challenges, we propose a new object localization framework, which can be divided into three processes: region proposal, classification, and accurate object localization process. First, a region proposal method is used to generate candidate regions with the aim of detecting all objects of interest within these images. Then, generic image features from a local image corresponding to each region proposal are extracted by a combination model of 2-D reduction convolutional neural networks (CNNs). Finally, to improve the location accuracy, we propose an unsupervised score-based bounding box regression (USB-BBR) algorithm, combined with a nonmaximum suppression algorithm to optimize the bounding boxes of regions that detected as objects. Experiments show that the dimension-reduction model performs better than the retrained and fine-tuned models and the detection precision of the combined CNN model is much higher than that of any single model. Also our proposed USB-BBR algorithm can more accurately locate objects within an image. Compared with traditional features extraction methods, such as elliptic Fourier transform-based histogram of oriented gradients and local binary pattern histogram Fourier, our proposed localization framework shows robustness when dealing with different complex backgrounds.",
"title": ""
},
{
"docid": "08ca7be2334de477905e8766c8612c8f",
"text": "a r t i c l e i n f o a b s t r a c t A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.",
"title": ""
},
{
"docid": "a9dd71d336baa0ea78ceb0435be67f67",
"text": "In current credit ratings models, various accounting-based information are usually selected as prediction variables, based on historical information rather than the market’s assessment for future. In the study, we propose credit rating prediction model using market-based information as a predictive variable. In the proposed method, Moody’s KMV (KMV) is employed as a tool to evaluate the market-based information of each corporation. To verify the proposed method, using the hybrid model, which combine random forests (RF) and rough set theory (RST) to extract useful information for credit rating. The results show that market-based information does provide valuable information in credit rating predictions. Moreover, the proposed approach provides better classification results and generates meaningful rules for credit ratings. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0a09f894029a0b8730918c14906dca9e",
"text": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the E cient Markets Hypothesis (EMH) that underpins much economic theory. In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor’s 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.",
"title": ""
},
{
"docid": "aeaee20b184e346cd469204dcf49d815",
"text": "Naresh Kumari , Nitin Malik , A. N. Jha , Gaddam Mallesham #*4 # Department of Electrical, Electronics and Communication Engineering, The NorthCap University, Gurgaon, India 1 nareshkumari@ncuindia.edu 2 nitinmalik77@gmail.com * Ex-Professor, Electrical Engineering, Indian Institute of Technology, New Delhi, India 3 anjha@ee.iitd.ac.in #* Department of Electrical Engineering, Osmania University, Hyderabad, India 4 gm.eed.cs@gmail.com",
"title": ""
},
{
"docid": "ed0d82bcc688a0101ae914ee208a6e13",
"text": "Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data. In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for the “active recognition” setting. Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world. To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent’s motions on its internal representation of its cumulative knowledge obtained from all past views. Results across two challenging datasets confirm both that our end-toend system successfully learns meaningful policies for active recognition, and that “learning to look ahead” further boosts recognition performance.",
"title": ""
},
{
"docid": "ece03e1f4d2d129daafebc63872a41e2",
"text": "With the development of Internet, social networks have become important platforms which allow users to follow streams of posts generated by their friends and acquaintances. Through mining a collection of nodes with similarities, community detection can make us understand the characteristics of complex network deeply. Therefore, community detection has attracted increasing attention in recent years. Since targeted at on-line social networks, we investigate how to exploit user's profile and topological structure information in social circle discovery. Firstly, according to directionality of linkages, we put forward inlink Salton metric and out-link Salton metric to measure user's topological structure. Then we propose an improved density peaks-based clustering method and deploy it to discover social circles with overlap on account of user's profileand topological structure-based features. Experiments on real-world dataset demonstrate the effectiveness of the proposed framework. Further experiments are conducted to understand the importance of different parameters and different features in social circle discovery. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
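The in-link and out-link Salton metrics mentioned above reduce to a cosine-style overlap between neighbor sets. A minimal sketch, assuming follower and followee sets are available per user:

```python
import math

def salton(a: set, b: set) -> float:
    """Salton index: |A ∩ B| / sqrt(|A| * |B|); 0 if either set is empty."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# For a directed social graph, compute two scores per user pair:
# in_link_sim  = salton(followers[u], followers[v])
# out_link_sim = salton(followees[u], followees[v])
```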
{
"docid": "658385e047ab382c014d53a4c086dbfb",
"text": "IP-based Internet is the largest network in the world therefore; there are excessive steps towards connecting Wireless Sensor Networks (WSNs) to the Internet. It is popularly known as to IoT (Internet of Things). IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks such as IoT. Development of application requires standardized architecture and platform for design and analysis of new ideas. This paper provides a brief awareness about recent IoT architectures and platforms. It is also discussed some of the gaps issues of the platforms related to usability of the user. This helps researcher to select a particular platform according to need.",
"title": ""
},
{
"docid": "42aa520e1c46749e7abc924c0f56442d",
"text": "Internet of Things is evolving heavily in these times. One of the major obstacle is energy consumption in the IoT devices (sensor nodes and wireless gateways). The IoT devices are often battery powered wireless devices and thus reducing the energy consumption in these devices is essential to lengthen the lifetime of the device without battery change. It is possible to lengthen battery lifetime by efficient but lightweight sensor data analysis in close proximity of the sensor. Performing part of the sensor data analysis in the end device can reduce the amount of data needed to transmit wirelessly. Transmitting data wirelessly is very energy consuming task. At the same time, the privacy and security should not be compromised. It requires effective but computationally lightweight encryption schemes. This survey goes thru many aspects to consider in edge and fog devices to minimize energy consumption and thus lengthen the device and the network lifetime.",
"title": ""
},
{
"docid": "b9733e699abaaedc380a45a3136f97da",
"text": "Generally speaking, anti-computer forensics is a set of techniques used as countermeasures to digital forensic analysis. When put into information and data perspective, it is a practice of making it hard to understand or find. Typical example being when programming code is often encoded to protect intellectual property and prevent an attacker from reverse engineering a proprietary software program.",
"title": ""
},
{
"docid": "c9c98e50a49bbc781047dc425a2d6fa1",
"text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.",
"title": ""
},
{
"docid": "c249c64b3e41cde156a63e1224ae2091",
"text": "The technology of intelligent agents and multi-agent systems seems set to radically alter the way in which complex, distributed, open systems are conceptualized and implemented. The purpose of this paper is to consider the problem of building a multi-agent system as a software engineering enterprise. The article focuses on three issues: (i) how agents might be specified; (ii) how these specifications might be refined or otherwise transformed into efficient implementations; and (iii) how implemented agents and multi-agent systems might subsequently be verified, in order to show that they are correct with respect to their specifications. These issues are discussed with reference to a number of casestudies. The article concludes by setting out some issues and open problems for future",
"title": ""
},
{
"docid": "74e15be321ec4e2d207f3331397f0399",
"text": "Interoperability has been a basic requirement for the modern information systems environment for over two decades. How have key requirements for interoperability changed over that time? How can we understand the full scope of interoperability issues? What has shaped research on information system interoperability? What key progress has been made? This chapter provides some of the answers to these questions. In particular, it looks at different levels of information system interoperability, while reviewing the changing focus of interoperability research themes, past achievements and new challenges in the emerging global information infrastructure (GII). It divides the research into three generations, and discusses some of achievements of the past. Finally, as we move from managing data to information, and in future knowledge, the need for achieving semantic interoperability is discussed and key components of solutions are introduced. Data and information interoperability has gained increasing attention for several reasons, including: • excellent progress in interconnection afforded by the Internet, Web and distributed computing infrastructures, leading to easy access to a large number of independently created and managed information sources of broad variety;",
"title": ""
},
{
"docid": "d18f9954bc8140fbf18e723f80523e8f",
"text": "A wideband circularly polarized reconfigurable patch antenna with L-shaped feeding probes is presented, which can generate unidirectional radiation performance that is switchable between left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). To realize this property, an L-probe fed square patch antenna is chosen as the radiator. A compact reconfigurable feeding network is implemented to excite the patch and generate either LHCP or RHCP over a wide operating bandwidth. The proposed antenna achieves the desired radiation patterns and has excellent characteristics, including a wide bandwidth, a compact structure, and a low profile. Measured results exhibit approximately identical performance for both polarization modes. Wide impedance, 31.6% from 1.2 to 1.65 GHz, and axial-ratio, 20.8% from 1.29 to 1.59 GHz, bandwidths are obtained. The gain is very stable across the entire bandwidth with a 6.9-dBic peak value. The reported circular-polarization reconfigurable antenna can mitigate the polarization mismatching problem in multipath wireless environments, increase the channel capacity of the system, and enable polarization coding.",
"title": ""
},
{
"docid": "9c9e3bcd8213739d2fab740b7010a1cd",
"text": "Data anonymization techniques have been the subject of intense investigation in recent years, for many kinds of structured data, including tabular, graph and item set data. They enable publication of detailed information, which permits ad hoc queries and analyses, while guaranteeing the privacy of sensitive information in the data against a variety of attacks. In this tutorial, we aim to present a unified framework of data anonymization techniques, viewed through the lens of uncertainty. Essentially, anonymized data describes a set of possible worlds, one of which corresponds to the original data. We show that anonymization approaches such as suppression, generalization, perturbation and permutation generate different working models of uncertain data, some of which have been well studied, while others open new directions for research. We demonstrate that the privacy guarantees offered by methods such as k-anonymization and l-diversity can be naturally understood in terms of similarities and differences in the sets of possible worlds that correspond to the anonymized data. We describe how the body of work in query evaluation over uncertain databases can be used for answering ad hoc queries over anonymized data in a principled manner. A key benefit of the unified approach is the identification of a rich set of new problems for both the Data Anonymization and the Uncertain Data communities.",
"title": ""
}
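As background for the anonymization guarantees discussed above, a small sketch of a k-anonymity check over chosen quasi-identifiers; the column names and k are assumptions, and l-diversity would additionally require checking the spread of sensitive values within each group.

```python
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    """Every combination of quasi-identifier values must occur at least k times."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

# Usage sketch:
# is_k_anonymous(df, quasi_identifiers=["zip", "age_range", "gender"], k=5)
```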
] |
scidocsrr
|
dba067a03b5868f07e4278f4b79fbff4
|
JPE 11-4-3 Control and Analysis of an Integrated Bidirectional DC / AC and DC / DC Converters for Plug-In Hybrid Electric Vehicle Applications
|
[
{
"docid": "68e3c37660f862e6a4af132ad1a9fa52",
"text": "Under the requirements of reducing emissions, air pollution and achieving higher fuel economy, companies are developing electric, hybrid electric, and plug-in hybrid electric vehicles. However, the high cost of these technologies and the low autonomy are very restrictive. In this paper a new concept of fast on-board battery charger for Electric Vehicles (EVs) is proposed which uses the electric motor like filter and the same converter for charging and traction mode.",
"title": ""
}
] |
[
{
"docid": "014ff12b51ce9f4399bca09e0dedabed",
"text": "The crystallographic preferred orientation (CPO) of olivine produced during dislocation creep is considered to be the primary cause of elastic anisotropy in Earth’s upper mantle and is often used to determine the direction of mantle flow. A fundamental question remains, however, as to whether the alignment of olivine crystals is uniquely produced by dislocation creep. Here we report the development of CPO in iron-free olivine (that is, forsterite) during diffusion creep; the intensity and pattern of CPO depend on temperature and the presence of melt, which control the appearance of crystallographic planes on grain boundaries. Grain boundary sliding on these crystallography-controlled boundaries accommodated by diffusion contributes to grain rotation, resulting in a CPO. We show that strong radial anisotropy is anticipated at temperatures corresponding to depths where melting initiates to depths where strongly anisotropic and low seismic velocities are detected. Conversely, weak anisotropy is anticipated at temperatures corresponding to depths where almost isotropic mantle is found. We propose diffusion creep to be the primary means of mantle flow.",
"title": ""
},
{
"docid": "bf8a24b974553d21849e9b066d78e6d4",
"text": "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.",
"title": ""
},
{
"docid": "fb71d22cad59ba7cf5b9806e37df3340",
"text": "Templates are effective tools for increasing the precision of natural language requirements and for avoiding ambiguities that may arise from the use of unrestricted natural language. When templates are applied, it is important to verify that the requirements are indeed written according to the templates. If done manually, checking conformance to templates is laborious, presenting a particular challenge when the task has to be repeated multiple times in response to changes in the requirements. In this article, using techniques from natural language processing (NLP), we develop an automated approach for checking conformance to templates. Specifically, we present a generalizable method for casting templates into NLP pattern matchers and reflect on our practical experience implementing automated checkers for two well-known templates in the requirements engineering community. We report on the application of our approach to four case studies. Our results indicate that: (1) our approach provides a robust and accurate basis for checking conformance to templates; and (2) the effectiveness of our approach is not compromised even when the requirements glossary terms are unknown. This makes our work particularly relevant to practice, as many industrial requirements documents have incomplete glossaries.",
"title": ""
},
{
"docid": "3f36b23dd997649b8df6c7fa7fb73963",
"text": "This paper presents a virtual impedance design and implementation approach for power electronics interfaced distributed generation (DG) units. To improve system stability and prevent power couplings, the virtual impedances can be placed between interfacing converter outputs and the main grid. However, optimal design of the impedance value, robust implementation of the virtual impedance, and proper utilization of the virtual impedance for DG performance enhancement are key for the virtual impedance concept. In this paper, flexible small-signal models of microgrids in different operation modes are developed first. Based on the developed microgrid models, the desired DG impedance range is determined considering the stability, transient response, and power flow performance of DG units. A robust virtual impedance implementation method is also presented, which can alleviate voltage distortion problems caused by harmonic loads compared to the effects of physical impedances. Furthermore, an adaptive impedance concept is proposed to further improve power control performances during the transient and grid faults. Simulation and experimental results are provided to validate the impedance design approach, the virtual impedance implementation method, and the proposed adaptive transient impedance control strategies.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "f9a4cea63b2df8b0a93d1652f17ff095",
"text": "The current virtual machine(VM) resources scheduling in cloud computing environment mainly considers the current state of the system but seldom considers system variation and historical data, which always leads to load imbalance of the system. In view of the load balancing problem in VM resources scheduling, this paper presents a scheduling strategy on load balancing of VM resources based on genetic algorithm. According to historical data and current state of the system and through genetic algorithm, this strategy computes ahead the influence it will have on the system after the deployment of the needed VM resources and then chooses the least-affective solution, through which it achieves the best load balancing and reduces or avoids dynamic migration. This strategy solves the problem of load imbalance and high migration cost by traditional algorithms after scheduling. Experimental results prove that this method is able to realize load balancing and reasonable resources utilization both when system load is stable and variant.",
"title": ""
},
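The scheduling strategy above evaluates candidate VM placements with a genetic algorithm before deployment. A compact sketch of that general pattern follows; the chromosome encoding (one host index per VM), the fitness (negative standard deviation of host loads), and the GA parameters are illustrative assumptions rather than the paper's exact formulation.

```python
import random
import statistics

def fitness(assignment, vm_loads, n_hosts):
    # Higher fitness = more evenly balanced host loads.
    host_load = [0.0] * n_hosts
    for vm, host in enumerate(assignment):
        host_load[host] += vm_loads[vm]
    return -statistics.pstdev(host_load)

def evolve(vm_loads, n_hosts, pop_size=40, generations=200, p_mut=0.1):
    n_vms = len(vm_loads)
    pop = [[random.randrange(n_hosts) for _ in range(n_vms)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, vm_loads, n_hosts), reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_vms)
            child = p1[:cut] + p2[cut:]           # single-point crossover
            if random.random() < p_mut:           # mutation: move one VM
                child[random.randrange(n_vms)] = random.randrange(n_hosts)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda a: fitness(a, vm_loads, n_hosts))

# Usage sketch: best = evolve(vm_loads=[0.3, 0.7, 0.2, 0.9, 0.4], n_hosts=3)
```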
{
"docid": "d3f256c026125f98ccb09fd6403ee5a0",
"text": "Endocytic mechanisms control the lipid and protein composition of the plasma membrane, thereby regulating how cells interact with their environments. Here, we review what is known about mammalian endocytic mechanisms, with focus on the cellular proteins that control these events. We discuss the well-studied clathrin-mediated endocytic mechanisms and dissect endocytic pathways that proceed independently of clathrin. These clathrin-independent pathways include the CLIC/GEEC endocytic pathway, arf6-dependent endocytosis, flotillin-dependent endocytosis, macropinocytosis, circular doral ruffles, phagocytosis, and trans-endocytosis. We also critically review the role of caveolae and caveolin1 in endocytosis. We highlight the roles of lipids, membrane curvature-modulating proteins, small G proteins, actin, and dynamin in endocytic pathways. We discuss the functional relevance of distinct endocytic pathways and emphasize the importance of studying these pathways to understand human disease processes.",
"title": ""
},
{
"docid": "34979f09e25955c2cc3a2af3d29b36f1",
"text": "By making use of Artificial Intelligence (AI), Human Intelligence can be simulated by a machine, Neural Networks is one such sub field of AI. Artificial Neural Networks (ANN) consists of neurons and weights assigned to inter neuron connections helps in storing the acquired knowledge. This paper makes use of Hebbian learning rule to train the ANN of both sender and receiver machines. In the field of Public Key Cryptography (PKC), Pseudo Random Number Generator (PRNG) are widely used to generate unique keys and random numbers used in ANN which are found to possess many types of possible attacks. It is essential for a key to possess randomness for key strength and security. This paper proposes key generation for PKC by application of ANN using Genetic Algorithm (GA). It was noticed that use of ANN along with GA has not as yet been explored. GA approach is often applied for obtaining optimization and solutions in search problems. GA correlates to the nature to a large extent producing population of numbers where number possessing higher fitness value is replicated more. Thus, making GA a very good contender for PRNGs. Good Fitness function helps in exploring search space of random numbers in more efficient manner. GA PRNGs result samples satisfies frequency test and gap test. Thus the numbers generated after each iteration by GA PRNG are statistically verified to be random and nonrepeating, having no prior relation of next number from the previous ones, acting as an essential initialization parameter for neural algorithm overcomes the problem of acknowledging the random number generated by traditional PRNG. For generating public and private keys, different number of rounds of mixing is used. This ensures that the private key generated cannot be derived from public key. Our algorithm was observed to give fast and improved performance results having practical and feasible implementation.",
"title": ""
},
{
"docid": "040e5e800895e4c6f10434af973bec0f",
"text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.",
"title": ""
},
{
"docid": "3eb0a8ab4ad46d4091e7aa47bda9a6d9",
"text": "This paper presents the design, development, theoretical and measured results of Digital Beam Forming (DBF) hardware for use with a large, multi-beam conformal phased array antennas such as the Geodesic Dome Phased Array Antenna (GDPAA) shown in Figure 1[1]. The GDPAA system with DBF on receive can provide multiple simultaneous links for satellite tracking, telemetry, and command (TT&C) functions including adaptive pattern control for anti-jamming or interference suppression, and high resolution direction finding.",
"title": ""
},
{
"docid": "f75ea728332e3bc7511c5a8994c21695",
"text": "Every day, executives make decisions about pay, and they do so in a landscape that's shifting. As more and more companies base less of their compensation on straight salary and look to other financial options, managers are bombarded with advice about the best approaches to take. Unfortunately, much of that advice is wrong. Indeed, much of the conventional wisdom and public discussion about pay today is misleading, incorrect, or both. The result is that business people are adopting wrongheaded notions about how to pay people and why. In particular, they are subscribing to six dangerous myths about pay. Myth #1: labor rates are the same as labor costs. Myth #2: cutting labor rates will lower labor costs. Myth #3: labor costs represent a large portion of a company's total costs. Myth #4: keeping labor costs low creates a potent and sustainable competitive edge. Myth #5: individual incentive pay improves performance. Myth #6: people work primarily for the money. The author explains why these myths are so pervasive, shows where they go wrong, and suggests how leaders might think more productively about compensation. With increasing frequency, the author says, he sees managers harming their organizations by buying into--and acting on--these myths. Those that do, he warns, are probably doomed to endless tinkering with pay that at the end of the day will accomplish little but cost a lot.",
"title": ""
},
{
"docid": "1177dc9bc616ef4221a7db7722b58a6c",
"text": "The typical septum polarizer that has gained popularity in the literature may be unsuitable for high-power applications, due to the sharp corners in the design. In order to address this issue, the fundamentals of the septum operation are first revisited, using a graphical visualization through full-wave analysis. A septum profiled with smooth edges is next presented, with enhanced power-handling capabilities in comparison to the stepped-septum polarizer. In this work, the sigmoid function is introduced to represent the smooth contour of the septum, and to enable diverse configurations without any discontinuities. The smooth and stepped profiles are optimized using the Particle Swarm Optimization (PSO) technique. The maximum electric-field intensity around the smooth edges is investigated using a full-wave simulator, HFSS. Our observations show that the maximum electric field is reduced by 40% in comparison to the stepped septum. In Appendix 1, the numerical approach is evaluated by comparing the exact series solution for the half-plane scattering problem with the simulated results in HFSS. In Appendix 2, a septum design with rounded edges is also studied as another possible design to reduce the maximum fields.",
"title": ""
},
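The passage above introduces a sigmoid function for the smooth septum contour without giving its exact form. One common parameterization, offered only as an illustration:

\[
s(z) \;=\; \frac{b}{1 + e^{-k\,(z - z_0)}}, \qquad 0 \le z \le L,
\]

where b is the full septum height, L the septum length, z_0 the midpoint of the transition, and k a steepness parameter that the PSO step described above can tune for axial ratio and return loss.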
{
"docid": "5ad4560383ab74545c494ee722b1c57c",
"text": "In this paper, a sub-dictionary based sparse coding method is proposed for image representation. The novel sparse coding method substitutes a new regularization item for L1-norm in the sparse representation model. The proposed sparse coding method involves a series of sub-dictionaries. Each sub-dictionary contains all the training samples except for those from one particular category. For the test sample to be represented, all the sub-dictionaries should linearly represent it apart from the one that does not contain samples from that label, and this sub-dictionary is called irrelevant sub-dictionary. This new regularization item restricts the sparsity of each sub-dictionary's residual, and this restriction is helpful for classification. The experimental results demonstrate that the proposed method is superior to the previous related sparse representation based classification.",
"title": ""
},
{
"docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c",
"text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.",
"title": ""
},
{
"docid": "ebcff53d86162e30c43b58ae03e786a0",
"text": "The adjustment of probabilistic models for sentiment analysis to changes in language use and the perception of products can be realized via incremental learning techniques. We provide a free, open and GUI-based sentiment analysis tool that allows for a) relabeling predictions and/or adding labeled instances to retrain the weights of a given model, and b) customizing lexical resources to account for false positives and false negatives in sentiment lexicons. Our results show that incrementally updating a model with information from new and labeled instances can substantially increase accuracy. The provided solution can be particularly helpful for gradually refining or enhancing models in an easily accessible fashion while avoiding a) the costs for training a new model from scratch and b) the deterioration of prediction accuracy over time.",
"title": ""
},
{
"docid": "c50230c77645234564ab51a11fcf49d1",
"text": "We present an image set classification algorithm based on unsupervised clustering of labeled training and unlabeled test data where labels are only used in the stopping criterion. The probability distribution of each class over the set of clusters is used to define a true set based similarity measure. To this end, we propose an iterative sparse spectral clustering algorithm. In each iteration, a proximity matrix is efficiently recomputed to better represent the local subspace structure. Initial clusters capture the global data structure and finer clusters at the later stages capture the subtle class differences not visible at the global scale. Image sets are compactly represented with multiple Grassmannian manifolds which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm. We also propose an efficient eigenvector solver which not only reduces the computational cost of spectral clustering by many folds but also improves the clustering quality and final classification results. Experiments on five standard datasets and comparison with seven existing techniques show the efficacy of our algorithm.",
"title": ""
},
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
},
{
"docid": "80b041b8712436474a200c5b5ed3aeb2",
"text": "Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires to concurrently solve the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp.",
"title": ""
}
] |
scidocsrr
|
1cb62e21453954328aba1e8868b4a6e0
|
Control of Micro Mirrors for High Precision Performance
|
[
{
"docid": "d61ff7159a1559ec2c4be9450c1ad3b6",
"text": "This paper presents the control of an underactuated two-link robot called the Pendubot. We propose a controller for swinging the linkage and rise it to its uppermost unstable equilibrium position. The balancing control is based on an energy approach and the passivity properties of the system.",
"title": ""
}
] |
[
{
"docid": "a61f2e71e0b68d8f4f79bfa33c989359",
"text": "Model-based testing relies on behavior models for the generation of model traces: input and expected output---test cases---for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.",
"title": ""
},
{
"docid": "496864f6ccafbc23e52d8cead505eac7",
"text": "Hotel guests’ expectations and actual experiences on hotel service quality often fail to coincide due to guests’ unusually high anticipations, hotels’ complete breakdowns in delivering their standard, or the combination of both. Moreover, this disconfirmation could be augmented contingent upon the level of hotel segment (hotel star-classification) and the overall rating manifested by previous guests. By incorporating a 2 2 matrix design in which a hotel star-classification configures one dimension (2 versus 4 stars) and a customers’ overall rating (lower versus higher overall ratings) configures the other, this explorative multiple case study uses conjoint analyses to examine the differences in the comparative importance of the six hotel attributes (value, location, sleep quality, rooms, cleanliness, and service) among four prominent hotel chain brands located in the United States. Four major and eight minor propositions are suggested for future empirical research based on the results of the four combined studies. Through the analysis of online data, this study may enlighten hotel managers with various ways to accommodate hotel guests’ needs. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "484a7acba548ef132d83fc9931a45071",
"text": "This paper is focused on tracking control for a rigid body payload, that is connected to an arbitrary number of quadrotor unmanned aerial vehicles via rigid links. An intrinsic form of the equations of motion is derived on the nonlinear configuration manifold, and a geometric controller is constructed such that the payload asymptotically follows a given desired trajectory for its position and attitude. The unique feature is that the coupled dynamics between the rigid body payload, links, and quadrotors are explicitly incorporated into control system design and stability analysis. These are developed in a coordinate-free fashion to avoid singularities and complexities that are associated with local parameterizations. The desirable features of the proposed control system are illustrated by a numerical example.",
"title": ""
},
{
"docid": "1dc5a78a3a9c072f1f71da4aa257d3f2",
"text": "A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention. Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks o er an e cient and principled approach for avoiding the over tting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study.",
"title": ""
},
{
"docid": "2cd2a85598c0c10176a34c0bd768e533",
"text": "BACKGROUND\nApart from skills, and knowledge, self-efficacy is an important factor in the students' preparation for clinical work. The Physiotherapist Self-Efficacy (PSE) questionnaire was developed to measure physical therapy (TP) students' self-efficacy in the cardiorespiratory, musculoskeletal, and neurological clinical areas. The aim of this study was to establish the measurement properties of the Dutch PSE questionnaire, and to explore whether self-efficacy beliefs in students are clinical area specific.\n\n\nMETHODS\nMethodological quality of the PSE was studied using COSMIN guidelines. Item analysis, structural validity, and internal consistency of the PSE were determined in 207 students. Test-retest reliability was established in another sample of 60 students completing the PSE twice. Responsiveness of the scales was determined in 80 students completing the PSE at the start and the end of the second year. Hypothesis testing was used to determine construct validity of the PSE.\n\n\nRESULTS\nExploratory factor analysis resulted in three meaningful components explaining similar proportions of variance (25%, 21%, and 20%), reflecting the three clinical areas. Internal consistency of each of the three subscales was excellent (Cronbach's alpha > .90). Intra Class Correlation Coefficient was good (.80). Hypothesis testing confirmed construct validity of the PSE.\n\n\nCONCLUSION\nThe PSE shows excellent measurement properties. The component structure of the PSE suggests that self-efficacy about physiotherapy in PT students is not generic, but specific for a clinical area. As self-efficacy is considered a predictor of performance in clinical settings, enhancing self-efficacy is an explicit goal of educational interventions. Further research is needed to determine if the scale is specific enough to assess the effect of educational interventions on student self-efficacy.",
"title": ""
},
{
"docid": "06b81ec29ee26f13720891eea9f902df",
"text": "This paper reports the design of a Waveguide-Fed Cavity Backed Slot Antenna Array in Ku-band. The antenna is made entire via simple milling process. The overall antenna structure consists of 3 layers. The bottom layer is a waveguide feed network to provide corporate power division. In turn, the waveguide feed network is fed by a conventional SMA connector from the back. The waveguide network couples energy to an array of cavity via an aperture. This constitutes the middle layer. Each cavity then excites an array of 2×2 radiating slots in the top layer. The radiating slot elements and the feed network are designed to achieve wide bandwidth and gain performance. Finally, an 8×8 array antenna is designed with about 25dBi gain and bandwidth of 1.6GHz in the Ku-band.",
"title": ""
},
{
"docid": "86d3adcf5b0cf9d86e139d3f1cef5158",
"text": "Heatmap regression has been used for landmark localization for quite a while now. Most of the methods use a very deep stack of bottleneck modules for heatmap classification stage, followed by heatmap regression to extract the keypoints. In this paper, we present a single dendritic CNN, termed as Pose Conditioned Dendritic Convolution Neural Network (PCD-CNN), where a classification network is followed by a second and modular classification network, trained in an end to end fashion to obtain accurate landmark points. Following a Bayesian formulation, we disentangle the 3D pose of a face image explicitly by conditioning the landmark estimation on pose, making it different from multi-tasking approaches. Extensive experimentation shows that conditioning on pose reduces the localization error by making it agnostic to face pose. The proposed model can be extended to yield variable number of landmark points and hence broadening its applicability to other datasets. Instead of increasing depth or width of the network, we train the CNN efficiently with Mask-Softmax Loss and hard sample mining to achieve upto 15% reduction in error compared to state-of-the-art methods for extreme and medium pose face images from challenging datasets including AFLW, AFW, COFW and IBUG.",
"title": ""
},
{
"docid": "ef39209e61597136d5a954c70fcecbfe",
"text": "We introduce the Android Security Framework (ASF), a generic, extensible security framework for Android that enables the development and integration of a wide spectrum of security models in form of code-based security modules. The design of ASF reflects lessons learned from the literature on established security frameworks (such as Linux Security Modules or the BSD MAC Framework) and intertwines them with the particular requirements and challenges from the design of Android's software stack. ASF provides a novel security API that supports authors of Android security extensions in developing their modules. This overcomes the current unsatisfactory situation to provide security solutions as separate patches to the Android software stack or to embed them into Android's mainline codebase. This system security extensibility is of particular benefit for enterprise or government solutions that require deployment of advanced security models, not supported by vanilla Android. We present a prototypical implementation of ASF and demonstrate its effectiveness and efficiency by modularizing different security models from related work, such as dynamic permissions, inlined reference monitoring, and type enforcement.",
"title": ""
},
{
"docid": "1e07b6f78a6c61dd6dc1a9546f8cd2be",
"text": "Most hardware neural networks have a basic competitive learning rule on top of a more involved processing algorithm. This work highlights two basic learning rules/behavior: winner-take-all (WTA) and spike-timing-dependent plasticity (STDP). It also gives a design example implementing WTA combined with STDP in a position detector. A complementary metal-oxide-semiconductor (CMOS) and a memristor-MOS technology (MMOST) design simulation results are compared on the bases of power, area, and noise handling capabilities. Design and layout were done in 130-nm IBM process for CMOS, and the HSPICE model files for the process were used to simulate the CMOS part of the MMOST design. CMOS consumes area, 55-W max power, and requires a 3-dB SNR. On the other hand, the MMOST design consumes , 15-W max power, and requires a 4.8-dB SNR. There is a potential to improve upon analog computing with the adoption of MMOST designs.",
"title": ""
},
{
"docid": "d29c4e8598bbe2406ae314402f200f41",
"text": "A big step forward to improve power system monitoring and performance, continued load growth without a corresponding increase in transmission resources has resulted in reduced operational margins for many power systems worldwide and has led to operation of power systems closer to their stability limits and to power exchange in new patterns. These issues, as well as the on-going worldwide trend towards deregulation of the entire industry on the one hand and the increased need for accurate and better network monitoring on the other hand, force power utilities exposed to this pressure to demand new solutions for wide area monitoring, protection and control. Wide-area monitoring, protection, and control require communicating the specific-node information to a remote station but all information should be time synchronized so that to neutralize the time difference between information. It gives a complete simultaneous snap shot of the power system. The conventional system is not able to satisfy the time-synchronized requirement of power system. Phasor Measurement Unit (PMU) is enabler of time-synchronized measurement, it communicate the synchronized local information to remote station.",
"title": ""
},
{
"docid": "329487a07d4f71e30b64da5da1c6684a",
"text": "The purpose was to investigate the effect of 25 weeks heavy strength training in young elite cyclists. Nine cyclists performed endurance training and heavy strength training (ES) while seven cyclists performed endurance training only (E). ES, but not E, resulted in increases in isometric half squat performance, lean lower body mass, peak power output during Wingate test, peak aerobic power output (W(max)), power output at 4 mmol L(-1)[la(-)], mean power output during 40-min all-out trial, and earlier occurrence of peak torque during the pedal stroke (P < 0.05). ES achieved superior improvements in W(max) and mean power output during 40-min all-out trial compared with E (P < 0.05). The improvement in 40-min all-out performance was associated with the change toward achieving peak torque earlier in the pedal stroke (r = 0.66, P < 0.01). Neither of the groups displayed alterations in VO2max or cycling economy. In conclusion, heavy strength training leads to improved cycling performance in elite cyclists as evidenced by a superior effect size of ES training vs E training on relative improvements in power output at 4 mmol L(-1)[la(-)], peak power output during 30-s Wingate test, W(max), and mean power output during 40-min all-out trial.",
"title": ""
},
{
"docid": "a87da46ab4026c566e3e42a5695fd8c9",
"text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for buliding a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.",
"title": ""
},
{
"docid": "3792c6e065227cdbe8a9f87882224891",
"text": "The increasing size of workloads has led to the development of new technologies and architectures that are intended to help address the capacity limitations of DRAM main memories. The proposed solutions fall into two categories: those that re-engineer Flash-based SSDs to further improve storage system performance and those that incorporate non-volatile technology into a Hybrid main memory system. These developments have blurred the line between the storage and memory systems. In this paper, we examine the differences between these two approaches to gain insight into the types of applications and memory technologies that benefit the most from these different architectural approaches.\n In particular this work utilizes full system simulation to examine the impact of workload randomness on system performance, the impact of backing store latency on system performance, and how the different implementations utilize system resources differently. We find that the software overhead incurred by storage based implementations can account for almost 50% of the overall access latency. As a result, backing store technologies that have an access latency up to 25 microseconds tend to perform better when implemented as part of the main memory system. We also see that high degrees of random access can exacerbate the software overhead problem and lead to large performance advantages for the Hybrid main memory approach. Meanwhile, the page replacement algorithm utilized by the OS in the storage approach results in considerably better performance on highly sequential workloads at the cost of greater pressure on the cache.",
"title": ""
},
{
"docid": "515519cc7308477e1c38a74c4dd720f0",
"text": "The objective of cosmetic surgery is increased patient self-esteem and confidence. Most patients undergoing a procedure report these results post-operatively. The success of any procedure is measured in patient satisfaction. In order to optimize patient satisfaction, literature suggests careful pre-operative patient preparation including a discussion of the risks, benefits, limitations and expected results for each procedure undertaken. As a general rule, the patients that are motivated to surgery by a desire to align their outward appearance to their body-image tend to be the most satisfied. There are some psychiatric conditions that can prevent a patient from being satisfied without regard aesthetic success. The most common examples are minimal defect/Body Dysmorphic Disorder, the patient in crisis, the multiple revision patient, and loss of identity. This paper will familiarize the audience with these conditions, symptoms and related illnesses. Case examples are described and then explored in terms of the conditions presented. A discussion of the patient’s motivation for surgery, goals pertaining to specific attributes, as well as an evaluation of the patient’s understanding of the risks, benefits, and limitations of the procedure can help the physician determine if a patient is capable of being satisfied with a cosmetic plastic surgery procedure. Plastic surgeons can screen patients suffering from these conditions relatively easily, as psychiatry is an integral part of medical school education. If a psychiatric referral is required, then the psychiatrist needs to be aware of the nuances of each of these conditions.",
"title": ""
},
{
"docid": "023302562ddfe48ac81943fedcf881b7",
"text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.",
"title": ""
},
{
"docid": "be49c21abb971f31690fce9dc553e54b",
"text": "In last decade, various agile methods have been introduced and used by software industry. It has been observed that many practitioners are using hybrid of agile methods and traditional methods. The knowledge of agile software development process about the theoretical grounds, applicability in large development settings and connections to establish software engineering disciplines remain mostly in dark. It has been reported that it is difficult for average manager to implement agile method in the organization. Further, every agile method has its own development cycle that brings technological, managerial and environmental changes in organization. A proper roadmap of agile software development in the form of agile software development life cycle can be developed to address the aforesaid issues of agile software development process. Thus, there is strong need of agile software development life cycle that clearly defines the phases included in any agile method and also describes the artifacts of each phase. This generalization of agile software development life cycle provides the guideline for average developers about usability, suitability, applicability of agile methods. Keywords-Agile software Development; extreme Programming; Adaptive software developmen; Scrum; Agile Method;story.",
"title": ""
},
{
"docid": "6cddde477f66fd4511da84f4219f058d",
"text": "Variational Autoencoder (VAE) has achieved promising success since its emergence. In recent years, its various variants have been developed, especially those works which extend VAE to handle sequential data [1, 2, 5, 7]. However, these works either do not generate sequential latent variables, or encode latent variables only based on inputs from earlier time-steps. We believe that in real-world situations, encoding latent variables at a specific time-step should be based on not only previous observations, but also succeeding samples. In this work, we emphasize such fact and theoretically derive the bidirectional Long Short-Term Memory Variational Autoencoder (bLSTM-VAE), a novel variant of VAE whose encoders and decoders are implemented by bidirectional Long Short-Term Memory (bLSTM) networks. The proposed bLSTM-VAE can encode sequential inputs as an equal-length sequence of latent variables. A latent variable at a specific time-step is encoded by simultaneously processing observations from the first time-step till current time-step in a forward order and observations from current time-step till the last timestep in a backward order. As a result, we consider that the proposed bLSTM-VAE could learn latent variables reliably by mining the contextual information from the whole input sequence. In order to validate the proposed method, we apply it for gesture recognition using 3D skeletal joint data. The evaluation is conducted on the ChaLearn Look at People gesture dataset and NTU RGB+D dataset. The experimental results show that combining with the proposed bLSTM-VAE, the classification network performs better than when combining with a standard VAE, and also outperforms several state-of-the-art methods.",
"title": ""
},
{
"docid": "18a317b8470b4006ccea0e436f54cfcd",
"text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.",
"title": ""
},
{
"docid": "b54ca99ae8818517d5c04100bad0f3b4",
"text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved 0 norm. In this paper, a special type of tensor complementarity problems with Z -tensors has been considered. Under some mild conditions, we show that to pursuit the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow to achieve a global optimal solution to the relaxednonconvexpolynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify. This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114). B Ziyan Luo starkeynature@hotmail.com Liqun Qi liqun.qi@polyu.edu.hk Naihua Xiu nhxiu@bjtu.edu.cn 1 State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, People’s Repubic of China 2 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People’s Repubic of China 3 Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing, People’s Repubic of China 123 Author's personal copy",
"title": ""
},
{
"docid": "281a9d0c9ad186c1aabde8c56c41cefa",
"text": "Hardware manipulations pose a serious threat to numerous systems, ranging from a myriad of smart-X devices to military systems. In many attack scenarios an adversary merely has access to the low-level, potentially obfuscated gate-level netlist. In general, the attacker possesses minimal information and faces the costly and time-consuming task of reverse engineering the design to identify security-critical circuitry, followed by the insertion of a meaningful hardware Trojan. These challenges have been considered only in passing by the research community. The contribution of this work is threefold: First, we present HAL, a comprehensive reverse engineering and manipulation framework for gate-level netlists. HAL allows automating defensive design analysis (e.g., including arbitrary Trojan detection algorithms with minimal effort) as well as offensive reverse engineering and targeted logic insertion. Second, we present a novel static analysis Trojan detection technique ANGEL which considerably reduces the false-positive detection rate of the detection technique FANCI. Furthermore, we demonstrate that ANGEL is capable of automatically detecting Trojans obfuscated with DeTrust. Third, we demonstrate how a malicious party can semi-automatically inject hardware Trojans into third-party designs. We present reverse engineering algorithms to disarm and trick cryptographic self-tests, and subtly leak cryptographic keys without any a priori knowledge of the design’s internal workings.",
"title": ""
}
] |
scidocsrr
|
eebfaeb2186c10073c128dcf011c59b0
|
Effects of mindfulness-based stress reduction on emotional experience and expression: a randomized controlled trial.
|
[
{
"docid": "b5360df245a0056de81c89945f581f14",
"text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.",
"title": ""
},
{
"docid": "852b4c7b434937299a82c4b8aa3f264e",
"text": "Baer's review (2003; this issue) suggests that mindfulness-based interventions are clinically efficacious, but that better designed studies are now needed to substantiate the field and place it on a firm foundation for future growth. Her review, coupled with other lines of evidence, suggests that interest in incorporating mindfulness into clinical interventions in medicine and psychology is growing. It is thus important that professionals coming to this field understand some of the unique factors associated with the delivery of mindfulness-based interventions and the potential conceptual and practical pitfalls of not recognizing the features of this broadly unfamiliar landscape. This commentary highlights and contextualizes (1) what exactly mindfulness is, (2) where it came from, (3) how it came to be introduced into medicine and health care, (4) issues of cross-cultural sensitivity and understanding in the study of meditative practices stemming from other cultures and in applications of them in novel settings, (5) why it is important for people who are teaching mind-fulness to practice themselves, (6) results from 3 recent Health Care, and Society not reviewed by Baer but which raise a number of key questions about clinical applicability , study design, and mechanism of action, and (7) current opportunities for professional training and development in mindfulness and its clinical applications. Iappreciate the opportunity to comment on Baer's (2003; this issue) review of mindfulness training as clinical intervention and to add my own reflections on the emergence of mindfulness in a clinical context, especially in a journal explicitly devoted to both science and practice. The universe of mindfulness 1 brings with it a whole new meaning and thrust to the word practice, one which I believe has the potential to contribute profoundly to the further development of the field of clinical psychology and its allied disciplines , behavioral medicine, psychosomatic medicine, and health psychology, through both a broadening of research approaches to mind/body interactions and the development of new classes of clinical interventions. I find the Baer review to be evenhanded, cogent, and perceptive in its description and evaluation of the work that has been published through the middle of 2001, work that features mindfulness training as the primary element in various clinical interventions. It complements nicely the recent review by Bishop (2002), which to my mind ignores some of the most important, if difficult to define, features of such interventions in its emphasis on the perceived need",
"title": ""
},
{
"docid": "05b9ec9f105287fd8091cb79478da6bc",
"text": "There has been great interest in determining if mindfulness can be cultivated and if this cultivation leads to well-being. The current study offers preliminary evidence that at least one aspect of mindfulness, measured by the Mindful Attention and Awareness Scale (MAAS; K. W. Brown & R. M. Ryan, 2003), can be cultivated and does mediate positive outcomes. Further, adherence to the practices taught during the meditation-based interventions predicted positive outcomes. College undergraduates were randomly allocated between training in two distinct meditation-based interventions, Mindfulness Based Stress Reduction (MBSR; J. Kabat-Zinn, 1990; n=15) and E. Easwaran's (1978/1991) Eight Point Program (EPP; n=14), or a waitlist control (n=15). Pretest, posttest, and 8-week follow-up data were gathered on self-report outcome measures. Compared to controls, participants in both treatment groups (n=29) demonstrated increases in mindfulness at 8-week follow-up. Further, increases in mindfulness mediated reductions in perceived stress and rumination. These results suggest that distinct meditation-based practices can increase mindfulness as measured by the MAAS, which may partly mediate benefits. Implications and future directions are discussed.",
"title": ""
}
] |
[
{
"docid": "bedc7de2ede206905e89daf61828f868",
"text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.",
"title": ""
},
{
"docid": "c91cc6de1e26d9ac9b5ba03ba67fa9b9",
"text": "As in most of the renewable energy sources it is not possible to generate high voltage directly, the study of high gain dc-dc converters is an emerging area of research. This paper presents a high step-up dc-dc converter based on current-fed Cockcroft-Walton multiplier. This converter not only steps up the voltage gain but also eliminates the use of high frequency transformer which adds to cost and design complexity. N-stage Cockcroft-Walton has been utilized to increase the voltage gain in place of a transformer. This converter also provides dual input operation, interleaved mode and maximum power point tracking control (if solar panel is used as input). This converter is utilized for resistive load and a pulsed power supply and the effect is studied in high voltage application. Simulation has been performed by designing a converter of 450 W, 400 V with single source and two stage of Cockcroft-Walton multiplier and interleaved mode of operation is performed. Design parameters as well as simulation results are presented and verified in this paper.",
"title": ""
},
{
"docid": "23a21e2d967c8fb8ccc5d282c597ff06",
"text": "Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in recent literature. Among the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role to describe the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.",
"title": ""
},
{
"docid": "9ca90172c5beff5922b4f5274ef61480",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "eea4f0555cdf4050bdb4681c7a50c01d",
"text": "In this paper, a review on condition monitoring of induction motors is first presented. Then, an ensemble of hybrid intelligent models that is useful for condition monitoring of induction motors is proposed. The review covers two parts, i.e., (i) a total of nine commonly used condition monitoring methods of induction motors; and (ii) intelligent learning models for condition monitoring of induction motors subject to single and multiple input signals. Based on the review findings, the Motor Current Signature Analysis (MCSA) method is selected for this study owing to its online, non-invasive properties and its requirement of only single input source; therefore leading to a cost-effective condition monitoring method. A hybrid intelligent model that consists of the Fuzzy Min-Max (FMM) neural network and the Random Forest (RF) model comprising an ensemble of Classification and Regression Trees is developed. The majority voting scheme is used to combine the predictions produced by the resulting FMM-RF ensemble (or FMM-RFE) members. A benchmark problem is first deployed to evaluate the usefulness of the FMM-RFE model. Then, the model is applied to condition monitoring of induction motors using a set of real data samples. Specifically, the stator current signals of induction motors are obtained using the MCSA method. The signals are processed to produce a set of harmonic-based features for classification using the FMM-RFE model. The experimental results show good performances in both noise-free and noisy environments. More importantly, a set of explanatory rules in the form of a decision tree can be extracted from the FMM-RFE model to justify its predictions. The outcomes ascertain the effectiveness of the proposed FMM-RFE model in undertaking condition monitoring tasks, especially for induction motors, under different environments.",
"title": ""
},
{
"docid": "81516aee88847e21f3d629d242d2c851",
"text": "Recently identified cellular and molecular correlates of stress-induced plasticity suggest a putative link between neuronal remodeling in the amygdala and the development of anxiety-like behavior. Rodent models of immobilization stress, applied for 10 consecutive days, have been reported to enhance anxiety, and also cause dendritic elongation and spine formation in the basolateral amygdala (BLA). Paradoxically, longer exposure to stress, which is also anxiogenic, fails to affect key molecular markers of neuronal remodeling in the BLA. This has raised the possibility of homeostatic mechanisms being triggered by more prolonged stress that could potentially dampen the morphological effects of stress in the BLA. Therefore, we examined the cellular and behavioral impact of increasing the duration of stress in rats. We find that prolonged immobilization stress (PIS), spanning 21 days, caused significant enhancement in dendritic arborization of spiny BLA neurons. Spine density was also enhanced along these elongated dendrites in response to PIS. Finally, this striking increase in synaptic connectivity was accompanied by enhanced anxiety-like behavior in the elevated plus-maze. Thus, we did not detect any obvious morphological correlate of adaptive changes within the BLA that may have been activated by prolonged and repeated application of the same stressor for 21 days. These findings add to accumulating evidence that structural encoding of aversive experiences, through enhanced availability of postsynaptic dendritic surface and synaptic inputs on principal neurons of the BLA, may contribute to the affective symptoms of stress disorders.",
"title": ""
},
{
"docid": "e494d15ae21e10833140a2bd407be4bc",
"text": "Storytelling with data is becoming an important component of many fields such as graphic design, the advocacy of causes, and journalism. New techniques for integrating data visualization into narrative stories have now become commonplace. Authors are enabling new reader experiences, such as linking textual narrative and data visualizations through dynamic queries embedded in the text. Novel means of communicating position and navigating within the narrative also have emerged, such as utilizing scrolling to advance narration and initiate animations. We advance the study of narrative visualization through an analysis of a curated collection of recent data-driven stories shared on the web. Drawing from the results of this analysis, we present a set of techniques being employed in these examples, organized under four high-level categories that help authors to tell stories in creative ways: communicating narrative and explaining data, linking separated story elements, enhancing structure and navigation, and providing controlled exploration. We describe the benefits of each storytelling technique along with a number of example applications of the ideas through recent data-driven stories. Additionally, we discuss the trends we observed as well as how the field has evolved and grown. Finally, we conclude with a discussion of areas for future research.",
"title": ""
},
{
"docid": "1f95990d19f1be43de3ad4604beb39c6",
"text": "Fibers and yarns based on carbon nanotubes (CNT) are emerging as a possible improvement over more traditional high strength carbon fibers used as reinforcement elements in composite materials. This is driven by a desire to translate the exceptional mechanical properties of individual CNT shells to achieve high performance macroscopic fibers and yarns. One of the central limitations in this approach is the weak shear interactions between adjacent CNT shells and tubes within macroscopic fibers and yarns. Furthermore, the multiple levels of interaction, e.g., between tubes within a multi-walled CNTor between bundles within a fiber, compound the problem. One promising direction to overcome this limitation is the introduction of strong and stiff cross-linking bonds between adjacent carbon shells. A great deal of research has been devoted to studying such cross-linking by the irradiation of CNT based materials using either high energy particles, such as electrons, to directly covalently cross-link CNTs, or electromagnetic irradiation, such as gamma rays to strengthen polymer cross-links between CNT shells and tubes. Here we review recent progress in the field of irradiation-induced cross-linking at multiple levels in CNT based fibers with a focus on mechanical property improvements. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b01fbfbe98960e81359c73009a06f5bb",
"text": "Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained endto-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.",
"title": ""
},
{
"docid": "22ab8eb2b8eaafb2ee72ea0ed7148ca4",
"text": "As travel is taking more significant part in our life, route recommendation service becomes a big business and attracts many major players in IT industry. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of best travelling experience according to criteria, such as travelling distance, travelling time, traffic condition, etc. However, previous research shows that even the routes recommended by the big-thumb service providers can deviate significantly from the routes travelled by experienced drivers. It means travellers' preferences on route selection are influenced by many latent and dynamic factors that are hard to model exactly with pre-defined formulas. In this work we approach this challenging problem with a very different perspective- leveraging crowds' knowledge to improve the recommendation quality. In this light, CrowdPlanner - a novel crowd-based route recommendation system has been developed, which requests human workers to evaluate candidate routes recommended by different sources and methods, and determine the best route based on their feedbacks. In this paper, we particularly focus on two important issues that affect system performance significantly: (1) how to efficiently generate tasks which are simple to answer but possess sufficient information to derive user-preferred routes; and (2) how to quickly identify a set of appropriate domain experts to answer the questions timely and accurately. Specifically, the task generation component in our system generates a series of informative and concise questions with optimized ordering for a given candidate route set so that workers feel comfortable and easy to answer. In addition, the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy. A prototype system has been deployed to many voluntary mobile clients and extensive tests on real-scenario queries have shown the superiority of CrowdPlanner in comparison with the results given by map services and popular route mining algorithms.",
"title": ""
},
{
"docid": "bbe3551f2ed95dc2ca08dcff67186fba",
"text": "A high-dimensional shape transformation posed in a mass-preserving framework is used as a morphological signature of a brain image. Population differences with complex spatial patterns are then determined by applying a nonlinear support vector machine (SVM) pattern classification method to the morphological signatures. Significant reduction of the dimensionality of the morphological signatures is achieved via wavelet decomposition and feature reduction methods. Applying the method to MR images with simulated atrophy shows that the method can correctly detect subtle and spatially complex atrophy, even when the simulated atrophy represents only a 5% variation from the original image. Applying this method to actual MR images shows that brains can be correctly determined to be male or female with a successful classification rate of 97%, using the leave-one-out method. This proposed method also shows a high classification rate for old adults' age classification, even under difficult test scenarios. The main characteristic of the proposed methodology is that, by applying multivariate pattern classification methods, it can detect subtle and spatially complex patterns of morphological group differences which are often not detectable by voxel-based morphometric methods, because these methods analyze morphological measurements voxel-by-voxel and do not consider the entirety of the data simultaneously.",
"title": ""
},
{
"docid": "4a1a9504603177613cbc51c427de39d0",
"text": "A novel and low-cost embedded hardware architecture for real-time refocusing based on a standard plenoptic camera is presented in this study. The proposed layout design synthesizes refocusing slices directly from micro images by omitting the process for the commonly used sub-aperture extraction. Therefore, intellectual property cores, containing switch controlled Finite Impulse Response (FIR) filters, are developed and applied to the Field Programmable Gate Array (FPGA) XC6SLX45 from Xilinx. Enabling the hardware design to work economically, the FIR filters are composed of stored product as well as upsampling and interpolation techniques in order to achieve an ideal relation between image resolution, delay time, power consumption and the demand of logic gates. The video output is transmitted via High-Definition Multimedia Interface (HDMI) with a resolution of 720p at a frame rate of 60 fps conforming to the HD ready standard. Examples of the synthesized refocusing slices are presented.",
"title": ""
},
{
"docid": "cdca91b002e90e463a6a159a200844b8",
"text": "For many years, stainless steel, cobalt-chromium, and titanium alloys have been the primary biomaterials used for load-bearing applications. However, as the need for structural materials in temporary implant applications has grown, materials that provide short-term structural support and can be reabsorbed into the body after healing are being sought. Since traditional metallic biomaterials are biocompatible but not biodegradable, the potential for magnesium-based alloys, which are biodegradable and bioabsorbable, in biomedical applications has gained more interest. Biodegradable and bioabsorbable magnesium-based alloys provide a number of benefits over traditional permanent implants. This paper summarizes the history and current status of magnesium as a bioabsorbable implant material. Also discussed is the development of a magnesium-zinc-calcium alloy that demonstrates promising degradation behavior relative to a commercially available Mg and magnesium-aluminum-zinc alloy.",
"title": ""
},
{
"docid": "37426a6261243f5bbe6d59be3826a82f",
"text": "A key to successful face recognition is accurate and reliable face alignment using automatically-detected facial landmarks. Given this strong dependency between face recognition and facial landmark detection, robust face recognition requires knowledge of when the facial landmark detection algorithm succeeds and when it fails. Facial landmark confidence represents this measure of success. In this paper, we propose two methods to measure landmark detection confidence: local confidence based on local predictors of each facial landmark, and global confidence based on a 3D rendered face model. A score fusion approach is also introduced to integrate these two confidences effectively. We evaluate both confidence metrics on two datasets for face recognition: JANUS CS2 and IJB-A datasets. Our experiments show up to 9% improvements when face recognition algorithm integrates the local-global confidence metrics.",
"title": ""
},
{
"docid": "114381e33d6c08724057e3116952dafc",
"text": "We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction / augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.",
"title": ""
},
{
"docid": "4e8fc4ca98ca0498885c45a683cea282",
"text": "Recent architectures for the advanced metering infrastructure (AMI) have incorporated several back-end systems that handle billing and other smart grid control operations. The non-availability of metering data when needed or the untimely delivery of data needed for control operations will undermine the activities of these back-end systems. Unfortunately, there are concerns that cyber attacks such as distributed denial of service (DDoS) will manifest in magnitude and complexity in a smart grid AMI network. Such attacks will range from a delay in the availability of end user's metering data to complete denial in the case of a grounded network. This paper proposes a cloud-based (IaaS) firewall for the mitigation of DDoS attacks in a smart grid AMI network. The proposed firewall has the ability of not only mitigating the effects of DDoS attack but can prevent the attack before they are launched. Our proposed firewall system leverages on cloud computing technology which has an added advantage of reducing the burden of data computations and storage for smart grid AMI back-end systems. The openflow firewall proposed in this study is a better security solution with regards to the traditional on-premises DoS solutions which cannot cope with the wide range of new attacks targeting the smart grid AMI network infrastructure. Simulation results generated from the study show that our model can guarantee the availability of metering/control data and could be used to improve the QoS of the smart grid AMI network under a DDoS attack scenario.",
"title": ""
},
{
"docid": "ff939b33128e2b8d2cd0074a3b021842",
"text": "Breast cancer is the most common form of cancer among women worldwide. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast. Recently, computer-aided diagnosis (CAD) systems using ultrasound images have been developed to help radiologists to increase diagnosis accuracy. However, accurate ultrasound image segmentation remains a challenging problem due to various ultrasound artifacts. In this paper, we investigate approaches developed for breast ultrasound (BUS) image segmentation. In this paper, we reviewed the literature on the segmentation of BUS images according to the techniques adopted, especially over the past 10 years. By dividing into seven classes (i.e., thresholding-based, clustering-based, watershed-based, graph-based, active contour model, Markov random field and neural network), we have introduced corresponding techniques and representative papers accordingly. We have summarized and compared many techniques on BUS image segmentation and found that all these techniques have their own pros and cons. However, BUS image segmentation is still an open and challenging problem due to various ultrasound artifacts introduced in the process of imaging, including high speckle noise, low contrast, blurry boundaries, low signal-to-noise ratio and intensity inhomogeneity To the best of our knowledge, this is the first comprehensive review of the approaches developed for segmentation of BUS images. With most techniques involved, this paper will be useful and helpful for researchers working on segmentation of ultrasound images, and for BUS CAD system developers.",
"title": ""
},
{
"docid": "2d9d81869a5002e4dceb483aa78fd2f9",
"text": "We present a weakly supervised model that jointly performs both semanticand instance-segmentation – a particularly relevant problem given the substantial cost of obtaining pixel-perfect annotation for these tasks. In contrast to many popular instance segmentation approaches based on object detectors, our method does not predict any overlapping instances. Moreover, we are able to segment both “thing” and “stuff” classes, and thus explain all the pixels in the image. “Thing” classes are weakly-supervised with bounding boxes, and “stuff” with image-level tags. We obtain state-of-the-art results on Pascal VOC, for both full and weak supervision (which achieves about 95% of fullysupervised performance). Furthermore, we present the first weakly-supervised results on Cityscapes for both semanticand instance-segmentation. Finally, we use our weakly supervised framework to analyse the relationship between annotation quality and predictive performance, which is of interest to dataset creators.",
"title": ""
},
{
"docid": "3916e752fffbd121f5224a49883729d9",
"text": "Photovoltaic power plants (PVPPs) typically operate by tracking the maximum power point (MPP) in order to maximize the conversion efficiency. However, with the continuous increase of installed grid-connected PVPPs, power system operators have been experiencing new challenges, such as overloading, overvoltages, and operation during grid-voltage disturbances. Consequently, constant power generation (CPG) is imposed by grid codes. An algorithm for the calculation of the photovoltaic panel voltage reference, which generates a constant power from the PVPP, is introduced in this paper. The key novelty of the proposed algorithm is its applicability for both single- and two-stage PVPPs and flexibility to move the operation point to the right or left side of the MPP. Furthermore, the execution frequency of the algorithm and voltage increments between consecutive operating points are modified based on a hysteresis band controller in order to obtain fast dynamic response under transients and low-power oscillation during steady-state operation. The performance of the proposed algorithm for both single- and two-stage PVPPs is examined on a 50-kVA simulation setup of these topologies. Moreover, experimental results on a 1-kVA PV system validate the effectiveness of the proposed algorithm under various operating conditions, demonstrating functionalities of the proposed CPG algorithm.",
"title": ""
}
] |
scidocsrr
|
b255f6c12608f5b6e4208dd79c2121c6
|
AWL: Turning Spatial Aliasing From Foe to Friend for Accurate WiFi Localization
|
[
{
"docid": "9ad145cd939284ed77919b73452236c0",
"text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.",
"title": ""
}
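The core constraint that EZ-style approaches exploit is the physics of wireless propagation, typically approximated by a log-distance path-loss model. The Python/NumPy sketch below is not the paper's actual formulation; the parameter names and the 0.1 m distance floor are illustrative assumptions. It shows the kind of residual a solver such as the paper's genetic algorithm would minimise over the unknown AP positions, device positions and per-AP propagation parameters.

```python
import numpy as np

def rss_residual(ap_xy, dev_xy, p0, n_exp, rss_obs):
    """Sum of squared errors between predicted and observed RSS (dBm).

    ap_xy:   (num_aps, 2) unknown access-point positions
    dev_xy:  (num_obs, 2) unknown positions of the reporting devices
    p0:      (num_aps,)   per-AP reference power at 1 m (dBm)
    n_exp:   (num_aps,)   per-AP path-loss exponents
    rss_obs: (num_obs, num_aps) observed RSS, NaN where an AP was not heard
    """
    d = np.linalg.norm(dev_xy[:, None, :] - ap_xy[None, :, :], axis=-1)
    rss_pred = p0[None, :] - 10.0 * n_exp[None, :] * np.log10(np.maximum(d, 0.1))
    mask = ~np.isnan(rss_obs)                  # only score APs actually observed
    return float(np.sum((rss_pred[mask] - rss_obs[mask]) ** 2))
```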
] |
[
{
"docid": "a281ab54ac5b5ff85d09b773429291d3",
"text": "This article elaborates on the competencies, often referred to as 21st century competencies, that are needed to be able to live in and contribute to our current (and future) society. We begin by describing, analysing and reflecting on international frameworks describing 21st century competencies, giving special attention to digital literacy as one of the core competencies for the 21st century. This is followed by an analysis of the learning approaches that are considered appropriate for acquiring 21st century competencies, and the specific role of technology in these learning processes. Despite some consensus about what 21st century competencies are and how they can be acquired, results from international studies indicate that teaching strategies for 21st century competencies are often not well implemented in actual educational practice. The reasons for this include a lack of integration of 21st century competencies in curriculum and assessment, insufficient preparation of teachers and the absence of any systematic attention for strategies to adopt at scale innovative teaching and learning practices. The article concludes with a range of specific recommendations for the implementation of 21st century competencies.",
"title": ""
},
{
"docid": "a13c5d4c2890c1a452b8427718086292",
"text": "Correlation filter based trackers are ranked top in terms of performances. Nevertheless, they only employ a single kernel at a time. In this paper, we will derive a multi-kernel correlation filter (MKCF) based tracker which fully takes advantage of the invariance-discriminative power spectrums of various features to further improve the performance. Moreover, it may easily introduce location and representation errors to search several discrete scales for the proper one of the object bounding box, because normally the discrete candidate scales are determined and the corresponding feature pyramid are generated ahead of searching. In this paper, we will propose a novel and efficient scale estimation method based on optimal bisection search and fast evaluation of features. Our scale estimation method is the first one that uses the truly minimal number of layers of feature pyramid and avoids constructing the pyramid before searching for proper scales.",
"title": ""
},
{
"docid": "f52dca1ec4b77059639f6faf7c79746a",
"text": "We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple Xbar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various terminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. On the other hand, our grammars are much more compact and substantially more accurate than previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems.",
"title": ""
},
{
"docid": "5fa9d1d666dbf147d5fbf6928e6b67e0",
"text": "We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features. We identify a simple generative process that results in the same distribution over equivalence classes, which we call the Indian buffet process. We illustrate the use of this distribution as a prior in an infinite latent feature model, deriving a Markov chain Monte Carlo algorithm for inference in this model and applying the algorithm to an image dataset.",
"title": ""
},
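The generative process named in the entry above, the Indian buffet process, can be stated very compactly. The following NumPy sketch is a minimal, illustrative sampler for the distribution over binary feature matrices it describes; the parameter values are arbitrary.

```python
import numpy as np

def sample_ibp(num_objects, alpha, seed=None):
    """Sample a binary feature matrix from the Indian buffet process (IBP).

    Object i takes an existing feature k with probability m_k / i (m_k is the
    number of earlier objects with feature k) and then adds Poisson(alpha / i)
    brand-new features of its own.
    """
    rng = np.random.default_rng(seed)
    counts = []          # counts[k] = how many objects so far have feature k
    rows = []            # per-object list of active feature indices
    for i in range(1, num_objects + 1):
        active = [k for k, m in enumerate(counts) if rng.random() < m / i]
        for k in active:
            counts[k] += 1
        n_new = rng.poisson(alpha / i)
        new = list(range(len(counts), len(counts) + n_new))
        counts.extend([1] * n_new)
        rows.append(active + new)
    Z = np.zeros((num_objects, len(counts)), dtype=int)
    for i, feats in enumerate(rows):
        Z[i, feats] = 1
    return Z

Z = sample_ibp(num_objects=10, alpha=2.0, seed=0)
print(Z.shape, Z.sum(axis=0))   # finite rows, unbounded number of columns
```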
{
"docid": "81db5f67ccf7407aff2be6f8371bc95e",
"text": "BACKGROUND\nWe sought to estimate the national prevalence of HPV vaccine refusal and delay in a nationally-representative sample of parents of adolescents. We also compared parents who refused versus delayed HPV vaccine in terms of their vaccination beliefs and clinical communication preferences.\n\n\nMETHODS\nIn 2014 to 2015, we conducted an online survey of 1,484 US parents who reported on an 11- to 17-year-old child in their household. We used weighted multinomial logistic regression to assess correlates of HPV vaccine refusal and delay.\n\n\nRESULTS\nOverall, 28% of parents reported that they had ever \"refused or decided not to get\" HPV vaccine for their child, and an additional 8% of parents reported that they had \"delayed or put off getting\" HPV vaccine. Compared to no refusal/delay, refusal was associated with lower confidence in adolescent vaccination (relative risk ratio [RRR] = 0.66, 95% confidence interval [CI], 0.48-0.91), lower perceived HPV vaccine effectiveness (RRR = 0.68, 95% CI, 0.50-0.91), and higher perceived harms (RRR = 3.49, 95% CI, 2.65-4.60). In contrast, delay was associated with needing more information (RRR = 1.76, 95% CI, 1.08-2.85). Most parents rated physicians and information sheets as helpful for making decisions about HPV vaccination, although parents who reported refusal endorsed these resources less often.\n\n\nCONCLUSIONS\nOur findings suggest that HPV vaccine refusal is common among parents of adolescents and may have increased relative to previous estimates. Because the vaccination beliefs and communication preferences of parents who refuse appear to differ from those who delay, targeted communication strategies may be needed to effectively address HPV vaccine hesitancy.",
"title": ""
},
{
"docid": "346e160403ff9eb55c665f6cb8cca481",
"text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.",
"title": ""
},
{
"docid": "4dc5daa63bf280623914e2415bacd2a2",
"text": "The regular use of two languages by bilingual individuals has been shown to have a broad impact on language and cognitive functioning. In this monograph, we consider four aspects of this influence. In the first section, we examine differences between monolinguals and bilinguals in children’s acquisition of language and adults’ linguistic processing, particularly in terms of lexical retrieval. Children learning two languages from birth follow the same milestones for language acquisition as monolinguals do (first words, first use of grammar) but may use different strategies for language acquisition, and they generally have a smaller vocabulary in each language than do monolingual children learning only a single language. Adult bilinguals typically take longer to retrieve individual words than monolinguals do, and they generate fewer words when asked to satisfy a constraint such as category membership or initial letter. In the second section, we consider the impact of bilingualism on nonverbal cognitive processing in both children and adults. The primary effect in this case is the enhancement of executive control functions in bilinguals. On tasks that require inhibition of distracting information, switching between tasks, or holding information in mind while performing a task, bilinguals of all ages outperform comparable monolinguals. A plausible reason is that bilinguals recruit control processes to manage their ongoing linguistic performance and that these control processes become enhanced for other unrelated aspects of cognitive processing. Preliminary evidence also suggests that the executive control advantage may even mitigate cognitive decline in older age and contribute to cognitive reserve, which in turn may postpone Alzheimer’s disease. In the third section, we describe the brain networks that are responsible for language processing in bilinguals and demonstrate their involvement in nonverbal executive control for bilinguals. We begin by reviewing neuroimaging research that identifies the networks used for various nonverbal executive control tasks in the literature. These networks are used as a reference point to interpret the way in which bilinguals perform both verbal and nonverbal control tasks. The results show that bilinguals manage attention to their two language systems using the same networks that are used by monolinguals performing nonverbal tasks. In the fourth section, we discuss the special circumstances that surround the referral of bilingual children (e.g., language delays) and adults (e.g., stroke) for clinical intervention. These referrals are typically based on standardized assessments that use normative data from monolingual populations, such as vocabulary size and lexical retrieval. As we have seen, however, these measures are often different for bilinguals, both for children and adults. We discuss the implications of these linguistic differences for standardized test performance and clinical approaches. We conclude by considering some questions that have important public policy implications. What are the pros and cons of French or Spanish immersion educational programs, for example? Also, if bilingualism confers advantages in certain respects, how about three languages—do the benefits increase? In the healthcare field, how can current knowledge help in the treatment of bilingual aphasia patients following stroke? 
Given the recent increase in bilingualism as a research topic, answers to these and other related questions should be available in the near future.",
"title": ""
},
{
"docid": "3da64db5e0d9474eb2194e73f71e0d6c",
"text": "Standard cutaneous innervation maps show strict midline demarcation. Although authors of these maps accept variability of peripheral nerve distribution or occasionally even the midline overlap of cutaneous nerves, this concept seems to be neglected by many other anatomists. To support the statement that such transmedian overlap exists, we performed an extensive literature search and found ample evidence for all regions (head/neck, thorax/abdomen, back, perineum, and genitalia) that peripheral nerves cross the midline or communicate across the midline. This concept has substantial clinical implications, most notably in anesthesia and perineural tumor spread. This article serves as a springboard for future anatomical, clinical, and experimental research.",
"title": ""
},
{
"docid": "17ff47bb9d2aae9c70906af5a22e5e1b",
"text": "Machine learning has proven to be a powerful technique during the past decades. Artificial neural network (ANN), as one of the most popular machine learning algorithms, has been widely applied to various areas. However, their applications for catalysis were not well-studied until recent decades. In this review, we aim to summarize the applications of ANNs for catalysis research reported in the literature. We show how this powerful technique helps people address the highly complicated problems and accelerate the progress of the catalysis community. From the perspectives of both experiment and theory, this review shows how ANNs can be effectively applied for catalysis prediction, the design of new catalysts, and the understanding of catalytic structures.",
"title": ""
},
{
"docid": "2d5e013cad1112b6d09f5ef4241b9f33",
"text": "This paper presents a new optimal motion planning aiming to minimize the energy consumption of a wheeled mobile robot in robot applications. A model that can be used to formulate the energy consumption for kinetic energy transformation and for overcoming traction resistance is developed first. This model will provide a base for minimizing the robot energy consumption through a proper motion planning. To design the robot path, the A* algorithm is employed to generate an energy-efficient path where a new energy-related criterion is utilized in the cost function. To achieve a smooth trajectory along the generated path, the appropriate arrival time and velocity at the defined waypoints are selected for minimum energy consumption. Simulations and experiments are performed to demonstrate the energy-saving efficiency of the proposed motion planning approach.",
"title": ""
},
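As a rough illustration of how an energy-related criterion can be plugged into A*, the sketch below searches a 4-connected occupancy grid with an edge cost modelling the traction (rolling-resistance) energy per metre. The constants and the grid discretisation are illustrative assumptions; the paper's cost model additionally accounts for kinetic-energy transformation and for choosing arrival times and velocities at waypoints.

```python
import heapq
import math

MASS, G, C_RR = 20.0, 9.81, 0.02          # kg, m/s^2, rolling-resistance coeff. (illustrative)

def traction_energy(dist_m):
    """Energy (J) needed to overcome rolling resistance over dist_m metres."""
    return MASS * G * C_RR * dist_m

def astar_energy(grid, start, goal):
    """A* on a 4-connected occupancy grid (True = occupied) with an energy cost."""
    def h(p):   # straight-line traction energy: admissible lower bound
        return traction_energy(math.hypot(goal[0] - p[0], goal[1] - p[1]))

    g_best = {start: 0.0}
    came_from = {}
    frontier = [(h(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1], g_best[goal]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]]:          # occupied cell, skip
                continue
            g_new = g_best[node] + traction_energy(1.0)
            if g_new < g_best.get(nxt, float("inf")):
                g_best[nxt] = g_new
                came_from[nxt] = node
                heapq.heappush(frontier, (g_new + h(nxt), nxt))
    return None, float("inf")
```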
{
"docid": "22e68f5214e37b50062d4e32ca5c694a",
"text": "Present large-scale information technology environments are complex, heterogeneous compositions often affected by unpredictable behavior and poor manageability. This fostered substantial research on designs and techniques that enhance these systems with an autonomous behavior. In this survey, we focus on the self-healing branch of the research and give an overview of the current existing approaches. The survey is introduced by an outline of the origins of self-healing. Based on the principles of autonomic computing and self-adapting system research, we identify self-healing systems’ fundamental principles. The extracted principles support our analysis of the collected approaches. In a final discussion, we summarize the approaches’ common and individual characteristics. A comprehensive tabular overview of the researched material concludes the survey.",
"title": ""
},
{
"docid": "a92afc333e75b86faa5e820fcdc74abe",
"text": "The processes that cause and influence movement are one of the main points of enquiry in movement ecology. However, ecology is not the only discipline interested in movement: a number of information sciences are specialising in analysis and visualisation of movement data. The recent explosion in availability and complexity of movement data has resulted in a call in ecology for new appropriate methods that would be able to take full advantage of the increasingly complex and growing data volume. One way in which this could be done is to form interdisciplinary collaborations between ecologists and experts from information sciences that analyse movement. In this paper we present an overview of new movement analysis and visualisation methodologies resulting from such an interdisciplinary research network: the European COST Action \"MOVE - Knowledge Discovery from Moving Objects\" (http://www.move-cost.info). This international network evolved over four years and brought together some 140 researchers from different disciplines: those that collect movement data (out of which the movement ecology was the largest represented group) and those that specialise in developing methods for analysis and visualisation of such data (represented in MOVE by computational geometry, geographic information science, visualisation and visual analytics). We present MOVE achievements and at the same time put them in ecological context by exploring relevant ecological themes to which MOVE studies do or potentially could contribute.",
"title": ""
},
{
"docid": "cc976719dfc3e81c9a6b84905d7ed729",
"text": "ERP systems acceptance usually involves radical organizational change because it is often associated with fundamental organizational improvements that cut across functional and organizational boundaries. Recognizing that ERP systems involve organizational change and their implementation is overshadowed by a high failure rate, this study focuses attention on employees’ perceptions of such organizational change. For this purpose, the research incorporates a conceptual construct of attitude toward change that captures views about the need for organizational change. Structural equation analysis using LISREL provides significant support for the proposed relationships. Theoretical and practical implications are discussed along with limitations.",
"title": ""
},
{
"docid": "5585cc22a0af9cf00656ac04b14ade5a",
"text": "Side-channel attacks pose a critical threat to the deployment of secure embedded systems. Differential-power analysis is a technique relying on measuring the power consumption of device while it computes a cryptographic primitive, and extracting the secret information from it exploiting the knowledge of the operations involving the key. There is no open literature describing how to properly employ Digital Signal Processing (DSP) techniques in order to improve the effectiveness of the attacks. This paper presents a pre-processing technique based on DSP, reducing the number of traces needed to perform an attack by an order of magnitude with respect to the results obtained with raw datasets, and puts it into practical use attacking a commercial 32-bit software implementation of AES running on a Cortex-M3 CPU. The main contribution of this paper is proposing a leakage model for software implemented cryptographic primitives and an effective framework to extract it.",
"title": ""
},
{
"docid": "eb9b4bea2d1a6230f8fb9e742bb7bc23",
"text": "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE halfprecision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forwardand back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to halfprecision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.",
"title": ""
},
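The three techniques listed above (an FP32 master copy of the weights, loss scaling, and FP16 arithmetic with FP32 accumulation) can be illustrated on a toy problem. The NumPy sketch below is only a schematic stand-in for a real GPU training loop; the model, constants and the back-off rule on overflow are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 32)).astype(np.float32)
y = (X @ rng.standard_normal(32)).astype(np.float32)

master_w = np.zeros(32, dtype=np.float32)     # FP32 master copy of the weights
loss_scale = 1024.0                           # keeps small FP16 gradients from flushing to zero
lr = 0.05

for step in range(300):
    w16, x16, y16 = master_w.astype(np.float16), X.astype(np.float16), y.astype(np.float16)
    err = x16 @ w16 - y16                                     # forward pass in half precision
    scaled_err = err * np.float16(loss_scale)                 # gradient of the *scaled* loss, FP16
    grad = x16.T.astype(np.float32) @ scaled_err.astype(np.float32) / len(y)   # FP32 accumulation
    if not np.isfinite(grad).all():                           # FP16 overflow: skip step, back off
        loss_scale /= 2.0
        continue
    master_w -= lr * grad / loss_scale                        # unscale, update FP32 master copy

print(float(np.mean((X @ master_w - y) ** 2)))                # training loss after the loop
```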
{
"docid": "436369a1187f436290ae9b61f3e9ee1e",
"text": "In this paper we propose a sub-band energy based end-ofutterance algorithm that is capable of detecting the time instant when the user has stopped speaking. The proposed algorithm finds the time instant at which many enough sub-band spectral energy trajectories fall and stay for a pre-defined fixed time below adaptive thresholds, i.e. a non-speech period is detected after the end of the utterance. With the proposed algorithm a practical speech recognition system can give timely feedback for the user, thereby making the behaviour of the speech recognition system more predictable and similar across different usage environments and noise conditions. The proposed algorithm is shown to be more accurate and noise robust than the previously proposed approaches. Experiments with both isolated command word recognition and continuous digit recognition in various noise conditions verify the viability of the proposed approach with an average proper endof-utterance detection rate of around 94% in both cases, representing 43% error rate reduction over the most competitive previously published method.",
"title": ""
},
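A compact way to see the sub-band mechanism described above is the sketch below: per-band energies are compared against adaptive thresholds tied to a running noise-floor estimate, and the end of the utterance is declared once enough bands stay below threshold for a fixed time. All constants are illustrative placeholders, not the values tuned in the paper.

```python
import numpy as np

def end_of_utterance(frames_energy, margin_db=6.0, hold_frames=30, min_bands=0.6):
    """Return the frame index where the utterance is judged to have ended.

    frames_energy: (num_frames, num_bands) array of linear sub-band energies.
    A band is 'quiet' when its energy drops below an adaptive threshold set
    margin_db above the running minimum seen so far in that band.  The end of
    the utterance is the first frame after which at least `min_bands` of the
    bands stay quiet for `hold_frames` consecutive frames.
    """
    e_db = 10.0 * np.log10(frames_energy + 1e-12)
    running_min = np.minimum.accumulate(e_db, axis=0)       # adaptive noise-floor estimate
    quiet = e_db < running_min + margin_db                   # (frames, bands) boolean
    quiet_enough = quiet.mean(axis=1) >= min_bands           # per frame: enough bands quiet
    run = 0
    for t, q in enumerate(quiet_enough):
        run = run + 1 if q else 0
        if run >= hold_frames:
            return t - hold_frames + 1                       # utterance judged to end here
    return None
```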
{
"docid": "66fd3e27e89554e4c6ea5eef294a345b",
"text": "Large-scale distributed training of deep neural networks suffer from the generalization gap caused by the increase in the effective mini-batch size. Previous approaches try to solve this problem by varying the learning rate and batch size over epochs and layers, or some ad hoc modification of the batch normalization. We propose an alternative approach using a second-order optimization method that shows similar generalization capability to first-order methods, but converges faster and can handle larger minibatches. To test our method on a benchmark where highly optimized first-order methods are available as references, we train ResNet-50 on ImageNet. We converged to 75% Top-1 validation accuracy in 35 epochs for mini-batch sizes under 16,384, and achieved 75% even with a mini-batch size of 131,072, which took 100 epochs.",
"title": ""
},
{
"docid": "cfa76eea71e41b7893c166e1952107aa",
"text": "The dynamic, ubiquitous, and often real-time interaction enabled by social media significantly changes the landscape for brand management. A deep understanding of this change is critical since it may affect a brand’s performance substantially. Literature about social media’s impact on brands is evolving, but lacks a systematic identification of key challenges related to managing brands in this new environment. This paper reviews existing research and introduces a framework of social media’s impact on brand management. It argues that consumers are becoming pivotal authors of brand stories due to new dynamic networks of consumers and brands formed through social media and the easy sharing of brand experiences in such networks. Firms need to pay attention to such consumer-generated brand stories to ensure a brand’s success in the marketplace. The authors identify key research questions related to the phenomenon and the challenges in coordinating consumerand firm-generated brand stories.",
"title": ""
},
{
"docid": "ce6d7185031f1b205181298909e8a020",
"text": "BACKGROUND\nMost preschoolers with viral wheezing exacerbations are not atopic.\n\n\nAIM\nTo test in a prospective controlled trial whether wheezing preschoolers presenting to the ED are different from the above in three different domains defining asthma: the atopic characteristics based on stringent asthma predictive index (S-API), the characteristics of bronchial hyper-responsiveness (BHR), and airway inflammation.\n\n\nMETHODS\nThe S-API was prospectively collected in 41 preschoolers (age 31.9 ± 17.4 months, range; 1-6 years) presenting to the ED with acute wheezing and compared to healthy preschoolers (n = 109) from our community (community control group). Thirty out of the 41 recruited preschoolers performed two sets of bronchial challenge tests (BCT)-(methacholine and adenosine) within 3 weeks and following 3 months of the acute event and compared to 30 consecutive ambulatory preschoolers, who performed BCT for diagnostic workup in our laboratory (ambulatory control group). On presentation, induced sputum (IS) was obtained from 22 of the 41 children.\n\n\nOUTCOMES\nPrimary: S-API, secondary: BCTs characteristics and percent eosinophils in IS.\n\n\nRESULTS\nSignificantly more wheezing preschoolers were S-API positive compared with the community control group: 20/41 (48.7%) versus 15/109 (13.7%, P < 0.001). All methacholine-BCTs-30/30 (100%) were positive compared with 13/14 (92.8%) in the ambulatory control group (P = 0.32). However, 23/27 (85.2%) were adenosine-BCT positive versus 3/17 (17.5%) in the ambulatory control group (P < 0.001). Diagnostic IS success rate was 18/22 (81.8%). Unexpectedly, 9/18 (50.0%) showed eosinophilia in the IS.\n\n\nCONCLUSIONS\nWheezing preschoolers presenting to the ED is a unique population with significantly higher rate of positive S-API and adenosine-BCT compared with controls and frequently (50%) express eosinophilic airway inflammation.",
"title": ""
},
{
"docid": "9b4ffbbcd97e94524d2598cd862a400a",
"text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.",
"title": ""
}
] |
scidocsrr
|
93e6985793cc2a406cc67bf482bc54ad
|
“Positive” Results Increase Down the Hierarchy of the Sciences
|
[
{
"docid": "be183dc1e7dd57beba42ff1247c2a483",
"text": "G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.",
"title": ""
}
] |
[
{
"docid": "d46329330906d2ea997cb63cb465bec0",
"text": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.",
"title": ""
},
{
"docid": "612665818a7134a9ad8bfac472d021cf",
"text": "Matrix decomposition methods represent a data matrix as a product of two factor matrices: one containing basis vectors that represent meaningful concepts in the data, and another describing how the observed data can be expressed as combinations of the basis vectors. Decomposition methods have been studied extensively, but many methods return real-valued matrices. Interpreting real-valued factor matrices is hard if the original data is Boolean. In this paper, we describe a matrix decomposition formulation for Boolean data, the Discrete Basis Problem. The problem seeks for a Boolean decomposition of a binary matrix, thus allowing the user to easily interpret the basis vectors. We also describe a variation of the problem, the Discrete Basis Partitioning Problem. We show that both problems are NP-hard. For the Discrete Basis Problem, we give a simple greedy algorithm for solving it; for the Discrete Basis Partitioning Problem we show how it can be solved using existing methods. We present experimental results for the greedy algorithm and compare it against other, well known methods. Our algorithm gives intuitive basis vectors, but its reconstruction error is usually larger than with the real-valued methods. We discuss about the reasons for this behavior.",
"title": ""
},
{
"docid": "1ef1e20f24fa75b40bcc88a40a544c5b",
"text": "Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. Monitoring grid resources is a lively research area given the challenges and manifold applications. The aim of this paper is to advance the understanding of grid monitoring by introducing the involved concepts, requirements, phases, and related standardisation activities, including Global Grid Forum’s Grid Monitoring Architecture. Based on a refinement of the latter, the paper proposes a taxonomy of grid monitoring systems, which is employed to classify a wide range of projects and frameworks. The value of the offered taxonomy lies in that it captures a given system’s scope, scalability, generality and flexibility. The paper concludes with, among others, a discussion of the considered systems, as well as directions for future research. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "30b77626604d8d258ad77146e3ff7a2d",
"text": "A compact single-feed circularly-polarized (CP) wide beam microstrip antenna is proposed for CNSS application. The antenna is designed with a double-layer structure, comprising a circular patch with two rectangular stubs along the diameter direction and a parasitic ring right above it. The resonance frequency and the CP characteristics are mainly controlled by the circular patch and the rectangular stubs, respectively. The vertical HPBW (half power beam width) could be widened by the parasitic ring. Experimental results show that the measured vertical HPBW is approximately 140° and the measured out-of-roundness for the horizontal radiation pattern is only 1.1 dB. Besides, it could maintain good low-profile characteristics.",
"title": ""
},
{
"docid": "41df967b371c9e649a551706c87025a0",
"text": "Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome—a quantum state indicating which error has occurred—by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.",
"title": ""
},
{
"docid": "d5ef769538f334fe9dc3a8ac2110b7f1",
"text": "The rapid increase in the number and diversity of smart devices connected to the Internet has raised the issues of flexibility, efficiency, availability, security, and scalability within the current IoT network. These issues are caused by key mechanisms being distributed to the IoT network on a large scale, which is why a distributed secure SDN architecture for IoT using the blockchain technique (DistBlockNet) is proposed in this research. It follows the principles required for designing a secure, scalable, and efficient network architecture. The DistBlockNet model of IoT architecture combines the advantages of two emerging technologies: SDN and blockchains technology. In a verifiable manner, blockchains allow us to have a distributed peer-to-peer network where non-confident members can interact with each other without a trusted intermediary. A new scheme for updating a flow rule table using a blockchains technique is proposed to securely verify a version of the flow rule table, validate the flow rule table, and download the latest flow rules table for the IoT forwarding devices. In our proposed architecture, security must automatically adapt to the threat landscape, without administrator needs to review and apply thousands of recommendations and opinions manually. We have evaluated the performance of our proposed model architecture and compared it to the existing model with respect to various metrics. The results of our evaluation show that DistBlockNet is capable of detecting attacks in the IoT network in real time with low performance overheads and satisfying the design principles required for the future IoT network.",
"title": ""
},
{
"docid": "8051535c66ecd4a8553a7d33051b1ad4",
"text": "There are several invariant features of pointto-point human arm movements: trajectories tend to be straight, smooth, and have bell-shaped velocity profiles. One approach to accounting for these data is via optimization theory; a movement is specified implicitly as the optimum of a cost function, e.g., integrated jerk or torque change. Optimization models of trajectory planning, as well as models not phrased in the optimization framework, generally fall into two main groups-those specified in kinematic coordinates and those specified in dynamic coordinates. To distinguish between these two possibilities we have studied the effects of artificial visual feedback on planar two-joint arm movements. During self-paced point-to-point arm movements the visual feedback of hand position was altered so as to increase the perceived curvature of the movement. The perturbation was zero at both ends of the movement and reached a maximum at the midpoint of the movement. Cost functions specified by hand coordinate kinematics predict adaptation to increased curvature so as to reduce the visual curvature, while dynamically specified cost functions predict no adaptation in the underlying trajectory planner, provided the final goal of the movement can still be achieved. We also studied the effects of reducing the perceived curvature in transverse movements, which are normally slightly curved. Adaptation should be seen in this condition only if the desired trajectory is both specified in kinematic coordinates and actually curved. Increasing the perceived curvature of normally straight sagittal movements led to significant (P<0.001) corrective adaptation in the curvature of the actual hand movement; the hand movement became curved, thereby reducing the visually perceived curvature. Increasing the curvature of the normally curved transverse movements produced a significant (P<0.01) corrective adaptation; the hand movement became straighter, thereby again reducing the visually perceived curvature. When the curvature of naturally curved transverse movements was reduced, there was no significant adaptation (P>0.05). The results of the curvature-increasing study suggest that trajectories are planned in visually based kinematic coordinates. The results of the curvature-reducing study suggest that the desired trajectory is straight in visual space. These results are incompatible with purely dynamicbased models such as the minimum torque change model. We suggest that spatial perception-as mediated by vision-plays a fundamental role in trajectory planning.",
"title": ""
},
{
"docid": "b56a6fe9c9d4b45e9d15054004fac918",
"text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.",
"title": ""
},
{
"docid": "aa3abc75e37ed6de703d05c274806220",
"text": "We conducted an extensive set of empirical analyses to examine the effect of the number of events per variable (EPV) on the relative performance of three different methods for assessing the predictive accuracy of a logistic regression model: apparent performance in the analysis sample, split-sample validation, and optimism correction using bootstrap methods. Using a single dataset of patients hospitalized with heart failure, we compared the estimates of discriminatory performance from these methods to those for a very large independent validation sample arising from the same population. As anticipated, the apparent performance was optimistically biased, with the degree of optimism diminishing as the number of events per variable increased. Differences between the bootstrap-corrected approach and the use of an independent validation sample were minimal once the number of events per variable was at least 20. Split-sample assessment resulted in too pessimistic and highly uncertain estimates of model performance. Apparent performance estimates had lower mean squared error compared to split-sample estimates, but the lowest mean squared error was obtained by bootstrap-corrected optimism estimates. For bias, variance, and mean squared error of the performance estimates, the penalty incurred by using split-sample validation was equivalent to reducing the sample size by a proportion equivalent to the proportion of the sample that was withheld for model validation. In conclusion, split-sample validation is inefficient and apparent performance is too optimistic for internal validation of regression-based prediction models. Modern validation methods, such as bootstrap-based optimism correction, are preferable. While these findings may be unsurprising to many statisticians, the results of the current study reinforce what should be considered good statistical practice in the development and validation of clinical prediction models.",
"title": ""
},
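For readers who want to reproduce this kind of comparison, the sketch below implements Harrell-style bootstrap optimism correction for the c-statistic of a logistic regression model, assuming scikit-learn and NumPy. The number of bootstrap replicates and the model settings are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Bootstrap optimism correction for the c-statistic (AUC).

    X: (n, p) NumPy feature matrix, y: (n,) binary outcome vector.
    """
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])   # optimistic in-sample AUC
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))                   # bootstrap resample with replacement
        Xb, yb = X[idx], y[idx]
        if len(np.unique(yb)) < 2:                               # need both classes to fit/score
            continue
        m = LogisticRegression(max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, m.predict_proba(Xb)[:, 1])  # performance on the bootstrap sample
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])    # performance back on the original data
        optimism.append(auc_boot - auc_orig)
    return apparent - float(np.mean(optimism))
```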
{
"docid": "7462810e07059616c0e16cc4d51a28f9",
"text": "This paper introduces the use of experiential learning during the early stages of teacher professional development. Teachers observe student outcomes from the very beginning of the process and experience new pedagogical approaches as learners themselves before adapting and implementing them in their own classrooms. This research explores the implementation of this approach with teachers in Irish second level schools who are being asked to make significant pedagogic changes as part of a major curriculum reform. Teachers’ self-reflections, observations and interviews demonstrate how the process and outcomes influenced their beliefs, resulting in meaningful changes in classroom practice. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "dc708a73438124f69c9ac75f0f127710",
"text": "Machine learning algorithms often suffer from good generalization in testing domains especially when the training (source) and test (target) domains do not have similar distributions. To address this problem, several domain adaptation techniques have been proposed to improve the performance of the learning algorithms when they face accuracy degradation caused by the domain shift problem. In this paper, we focus on the non-homogeneous distributed target domains and propose a new latent subdomain discovery model to divide the target domain into subdomains while adapting them. It is expected that applying adaptation on subdomains increase the rate of detection in comparing with the situation that the target domain is seen as one single domain. The proposed division method considers each subdomain as a cluster which has the definite ratio of positive to negative samples, linear discriminability and conditional distribution similarity to the source domain. This method divides the target domain into subdomains while adapting the trained target classifier for each subdomain using Adapt-SVM adaptation method. It also has a simple solution for selecting the appropriate number of subdomains. We call our proposed method Cluster-based Adaptive SVM or CA-SVM in short. We test CA-SVM on two different computer vision problems, pedestrian detection and image classification. The experimental results show the advantage in accuracy rate for our approach in comparison to several baselines.",
"title": ""
},
{
"docid": "447d90ae681f7858b8873895e5e33357",
"text": "This is the final paper of a four part series on the management of worn dentition. The factors affecting the selection of restorative techniques for generalized toothwear, such as pulpal vitality, jaw relationship and occlusal guidance are discussed. The practical steps of oral rehabilitation using fixed prostheses are illustrated with two clinical cases.",
"title": ""
},
{
"docid": "9d7623afe7b3ef98f81e1de0f2f2806d",
"text": "The fashion industry faces the increasing complexity of its activities such as the globalization of the market, the proliferation of information, the reduced time to market, the increasing distance between industrial partners and pressures related to costs. Digital prototype in the textile and clothing industry enables technologies in the process of product development where various operators are involved in the different stages, with various skills and competencies, and different necessity of formalizing and defining in a deterministic way the result of their activities. Taking into account the recent trends in the industry, the product development cycle and the use of new digital technologies cannot be restricted in the “typical cycle” but additional tools and skills are required to be integrated taking into account these developments [1].",
"title": ""
},
{
"docid": "d8d8b6e722ac4dd4f33c30ae0c1a46ec",
"text": "Industrial applications for frequency modulated continuous wave (FMCW) radar systems often require the separation of close targets, which is limited by the available distance resolution of the radar. In general, for the evaluation of FMCW radar signals a fast Fourier transformation (FFT) based method is used. However, the FFT inherently limits the available distance resolution in dependence of the bandwidth. To overcome this limitation, super resolution algorithms or parametric methods can be used. A 122 GHz FMCW radar with a bandwidth of 1 GHz is used to take measurements of two targets, below the FFT limited distance resolution (150 mm) of the FMCW radar. It is shown by using an expectation maximization based algorithm that it is still possible to distinguish between both targets over the whole measurement distance. The maximum distance error remains in the millimeter range as long as the targets are separated by at least 50 mm, which equals an improved distance resolution of a factor 3 compared to the FFT. Furthermore, the influence of model order errors on the expectation maximization algorithm are analyzed.",
"title": ""
},
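The 150 mm figure quoted above is the usual FFT-limited range resolution of an FMCW radar, delta_R = c / (2B); the short calculation below confirms it for a 1 GHz sweep.

```python
c = 299_792_458.0                  # speed of light in m/s
B = 1e9                            # sweep bandwidth in Hz
delta_R = c / (2 * B)              # FFT-limited range resolution in metres
print(f"{delta_R * 1e3:.0f} mm")   # about 150 mm, matching the paper's FFT limit
```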
{
"docid": "80a34e1544f9a20d6e1698278e0479b5",
"text": "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.",
"title": ""
},
{
"docid": "cf54284be4dbf970e286a83d3d89d08f",
"text": "The design of a wearable upper extremity therapy robot RUPERT IVtrade (Robotic Upper Extremity Repetitive Trainer) device is presented. It is designed to assist in repetitive therapy tasks related to activities of daily living which has been advocated for being more effective for functional recovery. RUPERTtrade has five actuated degrees of freedom driven by compliant and safe pneumatic muscle actuators (PMA) assisting shoulder elevation, humeral external rotation, elbow extension, forearm supination and wrist/hand extension. The device is designed to extend the arm and move in a 3D space with no gravity compensation, which is a natural setting for practicing day-to-day activities. Because the device is wearable and lightweight, the device is very portable; it can be worn standing or sitting for performing therapy tasks that better mimic activities of daily living. A closed-loop controller combining a PID-based feedback controller and a iterative learning controller (ILC)-based feedforward controller is proposed for RUPERT for passive repetitive task training. This type of control aids in overcoming the highly nonlinear nature of the plant under control, and also helps in adapting easily to different subjects for performing different tasks. The system was tested on two able-bodied subjects to evaluate its performance.",
"title": ""
},
{
"docid": "7eca03a9a5765ae0e234f74f9ef5cb4c",
"text": "In agile processes like Scrum, strong customer involvement demands for techniques to facilitate the requirements analysis and acceptance testing. Additionally, test automation is crucial, as incremental development and continuous integration require high efforts for testing. To cope with these challenges, we propose a modelbased technique for documenting customer’s requirements in forms of test models. These can be used by the developers as requirements specification and by the testers for acceptance testing. The modeling languages we use are light-weight and easy-to-learn. From the test models, we generate test scripts for FitNesse or Selenium which are well-established test automation tools in agile community.",
"title": ""
},
{
"docid": "8b3dffbb60d75f042c29a22340383453",
"text": "Welcome to the course: Gazing at Games: Using Eye Tracking to Control Virtual Characters. I will start with a short introduction of the course which will give you an idea of its aims and structure. I will also talk a bit about my background and research interests and motivate why I think this work is important.",
"title": ""
},
{
"docid": "1ccbe89f269df4e9b00ec88c920e02b4",
"text": "The configuration proposed in this paper aims to generate high voltage for pulsed power applications. The main idea is to charge two groups of capacitors in parallel through an inductor and take the advantage of resonant phenomena in charging each capacitor up to a double input voltage level. In each resonant half a cycle, one of those capacitor groups are charged, and finally the charged capacitors will be connected together in series and the summation of the capacitor voltages can be appeared at the output of the topology. This topology can be considered as a modified Marx generator which works based on the resonant concept. Simulation models of this converter have been investigated in Matlab/SIMULINK platform and the attained results fully satisfy the proper operation of the converter.",
"title": ""
},
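The voltage-doubling effect mentioned above follows from resonant charging of a capacitor through an inductor. The toy simulation below (component values and integration step are illustrative, and losses are neglected) shows the capacitor settling near twice the source voltage after one resonant half cycle.

```python
import numpy as np

Vin, L, C = 100.0, 1e-3, 10e-6                 # source voltage (V), inductance (H), capacitance (F)
dt = 1e-7                                      # integration time step (s)
steps = int(np.pi * np.sqrt(L * C) / dt)       # half of the resonant period T = 2*pi*sqrt(L*C)

v_c, i_l = 0.0, 0.0
for _ in range(steps):                         # semi-implicit Euler integration of the LC loop
    i_l += (Vin - v_c) / L * dt                # inductor current driven by the voltage difference
    v_c += i_l / C * dt                        # capacitor charged by the inductor current

print(round(v_c, 1))                           # close to 2 * Vin = 200 V at the end of the half cycle
```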
{
"docid": "af1b98a3b40e8adc053ddafa49e44fd0",
"text": "Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real world data. 1 peA and Feature Spaces Principal Component Analysis (PC A) (e.g. [3]) is an orthogonal basis transformation. The new basis is found by diagonalizing the centered covariance matrix of a data set {Xk E RNlk = 1, ... ,f}, defined by C = ((Xi (Xk))(Xi (Xk))T). The coordinates in the Eigenvector basis are called principal components. The size of an Eigenvalue >. corresponding to an Eigenvector v of C equals the amount of variance in the direction of v. Furthermore, the directions of the first n Eigenvectors corresponding to the biggest n Eigenvalues cover as much variance as possible by n orthogonal directions. In many applications they contain the most interesting information: for instance, in data compression, where we project onto the directions with biggest variance to retain as much information as possible, or in de-noising, where we deliberately drop directions with small variance. Clearly, one cannot assert that linear PCA will always detect all structure in a given data set. By the use of suitable nonlinear features, one can extract more information. Kernel PCA is very well suited to extract interesting nonlinear structures in the data [9]. The purpose of this work is therefore (i) to consider nonlinear de-noising based on Kernel PCA and (ii) to clarify the connection between feature space expansions and meaningful patterns in input space. Kernel PCA first maps the data into some feature space F via a (usually nonlinear) function <II and then performs linear PCA on the mapped data. As the feature space F might be very high dimensional (e.g. when mapping into the space of all possible d-th order monomials of input space), kernel PCA employs Mercer kernels instead of carrying Kernel peA and De-Noising in Feature Spaces 537 out the mapping <I> explicitly. A Mercer kernel is a function k(x, y) which for all data sets {Xi} gives rise to a positive matrix Kij = k(Xi' Xj) [6]. One can show that using k instead of a dot product in input space corresponds to mapping the data with some <I> to a feature space F [1], i.e. k(x,y) = (<I>(x) . <I>(y)). Kernels that have proven useful include Gaussian kernels k(x, y) = exp( -llx Yll2 Ie) and polynomial kernels k(x, y) = (x·y)d. Clearly, all algorithms that can be formulated in terms of dot products, e.g. Support Vector Machines [1], can be carried out in some feature space F without mapping the data explicitly. All these algorithms construct their solutions as expansions in the potentially infinite-dimensional feature space. The paper is organized as follows: in the next section, we briefly describe the kernel PCA algorithm. In section 3, we present an algorithm for finding approximate pre-images of expansions in feature space. 
Experimental results on toy and real world data are given in section 4, followed by a discussion of our findings (section 5). 2 Kernel peA and Reconstruction To perform PCA in feature space, we need to find Eigenvalues A > 0 and Eigenvectors V E F\\{O} satisfying AV = GV with G = (<I>(Xk)<I>(Xk)T).1 Substituting G into the Eigenvector equation, we note that all solutions V must lie in the span of <I>-images of the training data. This implies that we can consider the equivalent system A( <I>(Xk) . V) = (<I>(Xk) . GV) for all k = 1, ... ,f (1) and that there exist coefficients Q1 , ... ,Ql such that l V = L i=l Qi<l>(Xi) (2) Substituting C and (2) into (1), and defining an f x f matrix K by Kij := (<I>(Xi)· <I>(Xj)) = k( Xi, X j), we arrive at a problem which is cast in terms of dot products: solve",
"title": ""
}
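The eigenvalue problem sketched in the passage above (λV = CV recast as an ℓ×ℓ kernel eigenproblem in the coefficients α) can be written down compactly. The following is a minimal NumPy sketch of kernel PCA with a Gaussian kernel, assuming toy data and an arbitrary kernel width; it stops at the projection step and does not attempt the pre-image computation that is the subject of the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, c):
    # k(x, y) = exp(-||x - y||^2 / c)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / c)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))      # toy data (assumption)
c = 1.0                            # kernel width (assumption)

K = gaussian_kernel(X, X, c)
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one   # center the data in feature space

lam, alpha = np.linalg.eigh(Kc)              # eigenvalues in ascending order
lam, alpha = lam[::-1], alpha[:, ::-1]       # sort descending
keep = lam > 1e-12
alpha = alpha[:, keep] / np.sqrt(lam[keep])  # normalise so that ||V|| = 1

# nonlinear principal components of the training points
Z = Kc @ alpha
print(Z.shape)
```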
] |
scidocsrr
|
2f8f2a38302113820d390e97ec584854
|
BiLSTM 1 BiLSTM 1 Coattention 1 Coattention 2 BiLSTM 2 BiLSTM 2 Output BiLSTM Question Document
|
[
{
"docid": "9387c02974103731846062b549022819",
"text": "Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.",
"title": ""
},
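For readers unfamiliar with Pointer Net, the key mechanism referenced above is an attention distribution computed over the input positions, so that every output token is constrained to come from the input sequence. Below is a minimal, generic sketch of such pointer attention in NumPy; the additive scoring form and the parameter shapes are common choices assumed here for illustration, not the exact parameterization of the match-LSTM models.

```python
import numpy as np

def pointer_attention(H_enc, h_dec, W1, W2, v):
    # scores over input positions: s_i = v^T tanh(W1 h_i + W2 h_dec)
    s = np.tanh(H_enc @ W1.T + h_dec @ W2.T) @ v
    p = np.exp(s - s.max())
    return p / p.sum()              # distribution over the input tokens

rng = np.random.default_rng(0)
T, d = 8, 16                        # toy sizes (assumptions)
H_enc = rng.normal(size=(T, d))     # encoder states for T input tokens
h_dec = rng.normal(size=(d,))       # current decoder state
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
v = rng.normal(size=(d,))

print(pointer_attention(H_enc, h_dec, W1, W2, v))  # sums to 1 over positions
```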
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
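The parallel co-attention idea described above, in which an affinity matrix between question words and image regions drives both a question attention and an image attention, can be sketched in a few lines. The reduction from the affinity matrix to attention weights below (a max over the other modality followed by a softmax) is a simplification chosen for brevity and is an assumption, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, T, N = 32, 6, 49                   # feature dim, question words, image regions (assumptions)
Q = rng.normal(size=(T, d))           # question word features
V = rng.normal(size=(N, d))           # image region features
W_b = rng.normal(size=(d, d)) / np.sqrt(d)

C = np.tanh(Q @ W_b @ V.T)            # affinity between every word and every region (T x N)

a_q = softmax(C.max(axis=1))          # question attention: relevance of each word
a_v = softmax(C.max(axis=0))          # image attention: relevance of each region

q_att = a_q @ Q                       # attended question summary
v_att = a_v @ V                       # attended image summary
print(q_att.shape, v_att.shape)
```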
] |
[
{
"docid": "f24011de3d527f54be4dff329e3862e9",
"text": "Basic concepts of ANNs together with three most widely used ANN learning strategies (error back-propagation, Kohonen, and counterpropagation) are explained and discussed. In order to show how the explained methods can be applied to chemical problems, one simple example, the classification and the prediction of the origin of different olive oil samples, each represented by eigtht fatty acid concentrations, is worked out in detail.",
"title": ""
},
{
"docid": "60d7bd12f799b9f857c827a75fd1fb1c",
"text": "A novel run-pause-resume (RPR) debug methodology that can achieve complete cycle-level granularity of debug resolution for multiple clock domain systems is proposed. With this methodology one can pause the normal operation of a system at any cycle of any clock domain and resume the system without causing any data invalidation problem. Bidirectional transactions among different clock domains are analyzed and supported with this methodology. A debug platform with both breakpoint-setup software and clock-gating hardware is developed. The former allows the user to setup the breakpoint and calculate the exact time to transmit the pause control signal. The latter converts the pause signal to appropriate gating signals for the circuits under debug and the clock domain crossing interface. Experimental results show that the hardware area overhead is very small and 100% debug resolution is achieved. The experimented circuits include an industrial JPEG decoder system, several open-source cores and a system containing three clock domains.",
"title": ""
},
{
"docid": "eb30c6946e802086ac6de5848897a648",
"text": "To determine how age of acquisition influences perception of second-language speech, the Speech Perception in Noise (SPIN) test was administered to native Mexican-Spanish-speaking listeners who learned fluent English before age 6 (early bilinguals) or after age 14 (late bilinguals) and monolingual American-English speakers (monolinguals). Results show that the levels of noise at which the speech was intelligible were significantly higher and the benefit from context was significantly greater for monolinguals and early bilinguals than for late bilinguals. These findings indicate that learning a second language at an early age is important for the acquisition of efficient high-level processing of it, at least in the presence of noise.",
"title": ""
},
{
"docid": "fc50b185323c45e3d562d24835e99803",
"text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.",
"title": ""
},
{
"docid": "f6914702ebadddc3b8bc54fd87f1c571",
"text": "Energy crisis is one of the biggest problems in the third world developing country like Bangladesh. There is a big gap between generation and demand of Electric energy. Almost 50% population of our country is very far away from this blessings. Renewable energy is the only solution of this problem to be an energy efficient developed country. Solar energy is one of the great resources of the renewable energy which can play a crucial role in developing a power deficient country like Bangladesh. This paper provides a proposal of using dual axis solar tracker instead of solar panel. This encompasses a design of ideal solar house model using azimuth-altitude dual axis solar tracker on rooftop. It has been proved through mathematical calculation that the solar energy increases up to 50-60% where dual axis solar tracker is used. Apart from the mentioned design, this paper presents a structure and application of a microcontroller based azimuth-altitude dual axis solar tracker which tracks the solar panel according to the direction of the solar radiation. A built-in ADC converter is used to drive motor circuit. To minimize the power consumption by dual axis solar tracker, we are in favor of using two stepper motor especially during the seasonal change. The proposed model demonstrates that we require a very small amount of power from national grid if we can install dual axis solar tracker on rooftops of our residence; this is how increasing energy demand can effectively be met.",
"title": ""
},
{
"docid": "86f442763c0c9352c96f2d75e9bcf4db",
"text": "-Internet Of Things(IOT) has emerged as a trustworthy technology to improve the quality of life in smart homes through offering various automated, interactive and comfortable services. Sensors integrated at different places in homes, offices, and even in clothes, equipment, and utilities are used to sense and monitor owners’ positions, movements, required signs, valuable usage, temperature and humidity levels of rooms, etc. Along with sensing and monitoring capabilities, sensors cooperate and communicate with themselves to deliver; share and process sensed information and help real-time decision making procedures through activate suitable alerts and actions. However, ensuring privacy and providing enough security in these required services provided by IOTs is a major issue in smart home environments. In this paper, we examine the privacy and security challenges of IOTs and survey its possibilities for smart home environments. We discuss the unique characteristics that differentiate a smart environment from the rest, elaborate on security and privacy issues and their respective solution measures. A number of challenges and interesting research issues appearing from this study have been reported for further analysis.",
"title": ""
},
{
"docid": "040e5e800895e4c6f10434af973bec0f",
"text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.",
"title": ""
},
{
"docid": "50c961c8b229c7a4b31ca6a67e06112c",
"text": "The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, is one of the promising solutions to mitigate the interconnect problem in modern microprocessor designs. 3D memory stacking also enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the ``memory wall\" problem. In addition, heterogenous integration enabled by 3D technology can also result in innovation designs for future microprocessors. This paper serves as a survey of various approaches to design future 3D microprocessors, leveraging the benefits of fast latency, higher bandwidth, and heterogeneous integration capability that are offered by 3D technology.",
"title": ""
},
{
"docid": "bcb71f55375c1948283281d60ace5549",
"text": "This paper proposes a novel approach named AGM to e ciently mine the association rules among the frequently appearing substructures in a given graph data set. A graph transaction is represented by an adjacency matrix, and the frequent patterns appearing in the matrices are mined through the extended algorithm of the basket analysis. Its performance has been evaluated for the arti cial simulation data and the carcinogenesis data of Oxford University and NTP. Its high e ciency has been con rmed for the size of a real-world problem. . . .",
"title": ""
},
{
"docid": "b49e8f14c2c592e8abfed0e64f66bb5e",
"text": "Loan portfolio problems have historically been the major cause of bank losses because of inherent risk of possible loan losses (credit risk). The study of Bank Loan Fraud Detection and IT-Based Combat Strategies in Nigeria which focused on analyzing the loan assessment system was carried out purposely to overcome the challenges of high incidence of NonPerforming Loan (NPL) that are currently being experienced as a result of lack of good decision making mechanisms in disbursing loans. NPL has led to failures of some banks in the past, contributed to shareholders losing their investment in the banks and inaccessibility of bank loans to the public. Information Technology (IT) is a critical component in creating value in banking industries. It provides decision makers with an efficient means to store, calculate, and report information about risk, profitability, collateral analysis, and precedent conditions for loan. This results in a quicker response for client and efficient JIBC August 2011, Vol. 16, No.2 2 identification of appropriate risk controls to enable the financial institution realize a profit. In this paper we discussed the values of various applications of information technology in mitigating the problems of loan fraud in Nigeria financial Institutions.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "cf460c614c64b9fb69d5d56e40f2b6ba",
"text": "Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique gene and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons - Attribution - Share Alike (CC BY-SA) license.",
"title": ""
},
{
"docid": "455ad24d734b7941c4be4de78d99db9e",
"text": "This paper is concerned with simple human performance laws of action for three classes of taskspointing, crossing, and steering, as well as their applications in Virtual Reality research. In comparison to Fitts' law of pointing, the law of steering the quantitative relationship between human temporal performance and the movement path's spatial characteristicshas been notably under investigated. After a review of research on the law of steering in different domains and time periods, we examine the applicability of the law of steering in a VR locomotion task. Participants drove a virtual vehicle in a virtual environment on paths whose shape and width were systematically manipulated. Results showed that the law of steering indeed applies to locomotion in Virtual Environments. Participants' mean trial completion times linearly correlated (r2 between 0.985 and 0.999) with an index of difficulty quantified as path length to width ratio for the straight and circular paths used in this experiment. On average both the mean and the maximum speeds of the participants were linearly proportional to path width. Such human performance regularity provides a quantitative tool for 3D human-machine interface design and evaluation. We also propose to use the law-of-steering model in Virtual Reality manipulation tasks such as the ring and wire task in the future.",
"title": ""
},
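The index of difficulty mentioned above is the path length to width ratio, and the law of steering states that completion time grows linearly in it, T = a + b·(path length / path width). A minimal fitting sketch is given below; the trial data are invented for illustration and are not the measurements reported in the paper.

```python
import numpy as np

# Steering law: completion time T = a + b * ID, with ID = path length / path width.
# The toy measurements below are assumptions for illustration only.
path_len = np.array([200., 200., 400., 400., 800., 800.])   # e.g. pixels
path_wid = np.array([ 20.,  40.,  20.,  40.,  20.,  40.])
T        = np.array([1.1, 0.7, 1.9, 1.2, 3.6, 2.1])         # seconds

ID = path_len / path_wid
a, b = np.polyfit(ID, T, 1)[::-1]     # intercept a, slope b
pred = a + b * ID
r2 = 1 - ((T - pred) ** 2).sum() / ((T - T.mean()) ** 2).sum()
print(f"T ~ {a:.2f} + {b:.3f} * ID   (r^2 = {r2:.3f})")
```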
{
"docid": "28bb04440e9f5d0bfe465ec9fe685eda",
"text": "Model transformations are at the heart of model driven engineering (MDE) and can be used in many different application scenarios. For instance, model transformations are used to integrate very large models. As a consequence, they are becoming more and more complex. However, these transformations are still developed manually. Several code patterns are implemented repetitively, increasing the probability of programming errors and reducing code reusability. There is not yet a complete solution that automates the development of model transformations. In this paper we propose a novel approach that uses matching transformations and weaving models to semi-automate the development of transformations. Matching transformations are a special kind of transformations that implement heuristics and algorithms to create weaving models. Weaving models are models that capture different kinds of relationships between models. Our solution enables to rapidly implement and to customize these heuristics. We combine different heuristics, and we propose a new metamodel-based heuristic that exploits metamodel data to automatically produce weaving models. The weaving models are derived into model integration transformations.",
"title": ""
},
{
"docid": "45152c21deeb74c04815abbb2d04d1fb",
"text": "Semiconductor quantum dots (QDs) feature excellent properties, such as high quantum efficiency, tunable emission frequency, and good fluorescence stability. Incorporation of QDs into new devices relies upon high-resolution and high-throughput patterning techniques. Herein, we report a new printing technique known as bubble printing (BP), which exploits a light-generated microbubble at the interface of colloidal QD solution and a substrate to directly write QDs into arbitrary patterns. With the uniform plasmonic hot spot distribution for high bubble stability and the optimum light-scanning parameters, we have achieved full-color QD printing with submicron resolution (650 nm), high throughput (scanning rate of ∼10-2 m/s), and high adhesion of the QDs to the substrates. The printing parameters can be optimized to further control the fluorescence properties of the patterned QDs, such as emission wavelength and lifetime. The patterning of QDs on flexible substrates further demonstrates the wide applicability of this new technique. Thus, BP technique addresses the barrier of achieving a widely applicable, high-throughput and user-friendly patterning technique in the submicrometer regime, along with simultaneous fluorescence modification capability.",
"title": ""
},
{
"docid": "5aee510b62d8792a38044fc8c68a57e4",
"text": "In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles.",
"title": ""
},
{
"docid": "23e84b0df8b0a80da2d425c28894a745",
"text": "Robust Trust Reputation Systems (TRS) provide a most trustful reputation score for a specific product or service so as to support relying parties taking the right decision while interacting with an e-commerce application. Thus, TRS must rely on an appropriate architecture and suitable algorithms that are able to improve the selection, storage, generation and classification of textual feedbacks. In this work, we propose a new architecture for TRS in e-commerce applications. In fact, we propose an intelligent layer which displays to each feedback provider, who has already given his recommendation on a product, a collection of prefabricated feedbacks related to the same product. Our main contribution in this paper is a Reputation algorithm which studies the user's attitude toward this selection of prefabricated feedbacks. As a result of this study, the reputation algorithm generates better trust degree of the user, trust degree of the feedback and a better global reputation score of the product.",
"title": ""
},
{
"docid": "0c23a922fa3826428234a711e31d7875",
"text": "Pier is the second generation of an industrial strength content management and application framework. Pier is written with objects from top to bottom and it is easily customized to accommodate new requirements. Pier is based on Magritte, a powerful meta-description framework. Pier has proven to be very powerful in the combination with Seaside, to enable easy composition and configuration of interactive web sites through a convenient web interface without having to write code.",
"title": ""
},
{
"docid": "c210e0a2ba0d8daf6935f4d825319886",
"text": "Monte Carlo integration is a powerful technique for the evaluation of difficult integrals. Applications in rendering include distribution ray tracing, Monte Carlo path tracing, and form-factor computation for radiosity methods. In these cases variance can often be significantly reduced by drawing samples from several distributions, each designed to sample well some difficult aspect of the integrand. Normally this is done by explicitly partitioning the integration domain into regions that are sampled differently. We present a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good. These estimators are unbiased, and can reduce variance significantly at little additional cost. We present experiments and measurements from several areas in rendering: calculation of glossy highlights from area light sources, the “final gather” pass of some radiosity algorithms, and direct solution of the rendering equation using bidirectional path tracing. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation; G.1.9 [Numerical Analysis]: Integral Equations— Fredholm equations. Additional",
"title": ""
},
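The combination strategy alluded to above weights each sample by how likely it was under every sampling technique; with the balance heuristic, the weight for technique i at a sample x is n_i·p_i(x) / Σ_k n_k·p_k(x), and the weighted estimates from all techniques are summed. The sketch below applies this to a toy one-dimensional integral; the integrand and the two densities are assumptions chosen so the answer (1/3) is easy to check.

```python
import numpy as np

# Multiple importance sampling with the balance heuristic,
# illustrated on a toy 1-D integral: integral of x^2 over [0, 1] = 1/3.
rng = np.random.default_rng(0)

f  = lambda x: x ** 2
p1 = lambda x: np.ones_like(x)        # uniform density on [0, 1]
p2 = lambda x: 2.0 * x                # linear density on [0, 1]

n1 = n2 = 5000
x1 = rng.random(n1)                   # samples from p1
x2 = np.sqrt(rng.random(n2))          # samples from p2 via inverse CDF

def balance_weight(x, n_self, p_self, n_other, p_other):
    return n_self * p_self(x) / (n_self * p_self(x) + n_other * p_other(x))

est = (balance_weight(x1, n1, p1, n2, p2) * f(x1) / p1(x1)).sum() / n1 \
    + (balance_weight(x2, n2, p2, n1, p1) * f(x2) / p2(x2)).sum() / n2
print(est)   # close to 1/3
```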
{
"docid": "7cc2747afd6f7c2c2173cf9175574d74",
"text": "Craniopagus parasiticus, or épicome, is a rare teratological type, of which only six cases have been recorded in the medical literature. It differs from craniopagus conjoined twins in that the body and limbs of the parasitic twin are underdeveloped, leaving in some cases only a parasitic head, inserted on the crown of the autositic twin. The first case of this malformation was Everard Home's famous Twin-Headed Boy of Bengal, whose skull is preserved at the Hunterian Museum. In this historical review, Home's case is presented in some detail, and the later cases are used to explain further some of its particulars.",
"title": ""
}
] |
scidocsrr
|
196f32015db1f5d55351bbbc74c28c74
|
Cell phone-based biometric identification
|
[
{
"docid": "0ffdbffcd47088afe07cbb7507b20853",
"text": "This paper presents an approach on recognising individuals based on 3D acceleration data from walking, which are collected using MEMS. Unlike most other gait recognition methods, which are based on video source, our approach uses walking acceleration in three directions: vertical, backward-forward and sideways. Using gait samples from 21 individuals and applying two methods, histogram similarity and cycle length, the equal error rates of 5% and 9% are achieved, respectively.",
"title": ""
},
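One of the two methods mentioned above reduces a walking recording to a histogram of acceleration values and compares histograms between recordings. The sketch below follows that general idea with a simple overlap score; the binning, the use of the resultant magnitude, and the synthetic data are assumptions for illustration rather than the exact procedure of the paper.

```python
import numpy as np

def accel_histogram(ax, ay, az, bins):
    # resultant acceleration magnitude, reduced to a normalised histogram
    mag = np.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    h, _ = np.histogram(mag, bins=bins, density=True)
    return h / h.sum()

def histogram_similarity(h1, h2):
    # simple overlap score in [0, 1]; the paper's exact metric may differ
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
bins = np.linspace(0.0, 3.0, 41)                           # in g, an assumption
walk_a = [rng.normal(1.0, 0.3, 2000) for _ in range(3)]    # toy 3-axis recording
walk_b = [rng.normal(1.0, 0.3, 2000) for _ in range(3)]

score = histogram_similarity(accel_histogram(*walk_a, bins),
                             accel_histogram(*walk_b, bins))
print(f"similarity: {score:.3f}")   # compare against an enrolment threshold
```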
{
"docid": "6c4d6eff1fb7ef03efc3197726545ed8",
"text": "Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is di/cult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg, from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires speci5cation of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "fca76468b4d72fd5ef7c85b5d56548b9",
"text": "Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. High electricity consumption, associated with running a data center, not only reflects on its carbon footprint, but also increases the costs of running the data center itself. This paper addresses the problem of maximizing the revenues of Cloud providers by trimming down their electricity costs. As a solution allocation policies which are based on the dynamic powering servers on and off are introduced and evaluated. The policies aim at satisfying the conflicting goals of maximizing the users' experience while minimizing the amount of consumed electricity. The results of numerical experiments and simulations are described, showing that the proposed scheme performs well under different traffic conditions.",
"title": ""
},
{
"docid": "ebe86cf94b566d7a4df045ce28055f66",
"text": "Despite numerous predictions of the paperless office, knowledge work is still characterized by the combined use of paper and digital documents. Digital pen-and-paper user interfaces bridge the gap between both worlds by electronically capturing the interactions of a user with a pen on real paper. The contribution of this paper is two-fold: First, we introduce an interaction framework for pen-and-paper user interfaces consisting of six core interactions. This helps both in analyzing existing work practices and interfaces and in guiding the design of interfaces which offer complex functionality and nevertheless remain simple to use. Second, we apply this framework and contribute three novel pen-and-paper interaction strategies for creating hyperlinks between printed and digital documents and for tagging both types of documents.",
"title": ""
},
{
"docid": "24c00b40221b905943efbda6a7d5121f",
"text": "In four experiments, this research sheds light on aesthetic experiences by rigorously investigating behavioral, neural, and psychological properties of package design. We find that aesthetic packages significantly increase the reaction time of consumers' choice responses; that they are chosen over products with well-known brands in standardized packages, despite higher prices; and that they result in increased activation in the nucleus accumbens and the ventromedial prefrontal cortex, according to functional magnetic resonance imaging (fMRI). The results suggest that reward value plays an important role in aesthetic product experiences. Further, a closer look at psychometric and neuroimaging data finds that a paper-and-pencil measure of affective product involvement correlates with aesthetic product experiences in the brain. Implications for future aesthetics research, package designers, and product managers are discussed. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "15da6453d3580a9f26ecb79f9bc8e270",
"text": "In 2005 the Commission for Africa noted that ‘Tackling HIV and AIDS requires a holistic response that recognises the wider cultural and social context’ (p. 197). Cultural factors that range from beliefs and values regarding courtship, sexual networking, contraceptive use, perspectives on sexual orientation, explanatory models for disease and misfortune and norms for gender and marital relations have all been shown to be factors in the various ways that HIV/AIDS has impacted on African societies (UNESCO, 2002). Increasingly the centrality of culture is being recognised as important to HIV/AIDS prevention, treatment, care and support. With culture having both positive and negative influences on health behaviour, international donors and policy makers are beginning to acknowledge the need for cultural approaches to the AIDS crisis (Nguyen et al., 2008). The development of cultural approaches to HIV/AIDS presents two major challenges for South Africa. First, the multi-cultural nature of the country means that there is no single sociocultural context in which the HIV/AIDS epidemic is occurring. South Africa is home to a rich tapestry of racial, ethnic, religious and linguistic groups. As a result of colonial history and more recent migration, indigenous Africans have come to live alongside large populations of people with European, Asian and mixed descent, all of whom could lay claim to distinctive cultural practices and spiritual beliefs. Whilst all South Africans are affected by the spread of HIV, the burden of the disease lies with the majority black African population (see Shisana et al., 2005; UNAIDS, 2007). Therefore, this chapter will focus on some sociocultural aspects of life within the majority black African population of South Africa, most of whom speak languages that are classified within the broad linguistic grouping of Bantu languages. This large family of linguistically related ethnic groups span across southern Africa and comprise the bulk of the African people who reside in South Africa today (Hammond-Tooke, 1974). A second challenge involves the legitimacy of the culture concept. Whilst race was used in apartheid as the rationale for discrimination, notions of culture and cultural differences were legitimised by segregating the country into various ‘homelands’. Within the homelands, the majority black South Africans could presumably",
"title": ""
},
{
"docid": "3d22f5be70237ae0ee1a0a1b52330bfa",
"text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.",
"title": ""
},
{
"docid": "2e13b95d6892f6ce00c464e456a6e6a6",
"text": "The development of such system that automatically recognizes the input speech and translates in another language like Sanskrit is a challenging task. Sanskrit language is much more conjured language. The purpose of this paper is to explain a system which convert the English Speech into English text and then translate that English text into Sanskrit text and again convert that into speech. This system falls into the category of Speech-to-Speech translation. It unifies the isolated words class under the Speech Recognition type, traditional dictionary rule based machine translation approach and text to speech synthesizer. So basically it is classifies into three areas: Speech Recognition, Machine Translation and Speech Synthesis. This system matches tokens [1] from database to differentiate Subject, Object, Verb, Preposition, Adjective, and Adverb. This paper presents approach for translating well-structured English sentences into Sanskrit sentences. Since Sanskrit is free ordering language (or syntax free language) or we can say its meaning won't be change even if the order of words changes.",
"title": ""
},
{
"docid": "b54616ef0a962f7419589727cb7d276f",
"text": "Writing oracles is challenging. As a result, developers often create oracles that check too little, resulting in tests that are unable to detect failures, or check too much, resulting in tests that are brittle and difficult to maintain. In this paper we present a new technique for automatically analyzing test oracles. The technique is based on dynamic tainting and detects both brittle assertions—assertions that depend on values that are derived from uncontrolled inputs—and unused inputs—inputs provided by the test that are not checked by an assertion. We also presented OraclePolish, an implementation of the technique that can analyze tests that are written in Java and use the JUnit testing framework. Using OraclePolish, we conducted an empirical evaluation of more than 4000 real test cases. The results of the evaluation show that OraclePolish is effective; it detected 164 tests that contain brittle assertions and 1618 tests that have unused inputs. In addition, the results also demonstrate that the costs associated with using the technique are reasonable.",
"title": ""
},
{
"docid": "410a4df5b17ec0c4b160c378ca08bc17",
"text": "We present the results of an investigation into the nature of information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.",
"title": ""
},
{
"docid": "a430a43781d7fd4e36cd393103958265",
"text": "BACKGROUND\nThis review evaluates the DSM-IV criteria of social anxiety disorder (SAD), with a focus on the generalized specifier and alternative specifiers, the considerable overlap between the DSM-IV diagnostic criteria for SAD and avoidant personality disorder, and developmental issues.\n\n\nMETHOD\nA literature review was conducted, using the validators provided by the DSM-V Spectrum Study Group. This review presents a number of options and preliminary recommendations to be considered for DSM-V.\n\n\nRESULTS/CONCLUSIONS\nLittle supporting evidence was found for the current specifier, generalized SAD. Rather, the symptoms of individuals with SAD appear to fall along a continuum of severity based on the number of fears. Available evidence suggested the utility of a specifier indicating a \"predominantly performance\" variety of SAD. A specifier based on \"fear of showing anxiety symptoms\" (e.g., blushing) was considered. However, a tendency to show anxiety symptoms is a core fear in SAD, similar to acting or appearing in a certain way. More research is needed before considering subtyping SAD based on core fears. SAD was found to be a valid diagnosis in children and adolescents. Selective mutism could be considered in part as a young child's avoidance response to social fears. Pervasive test anxiety may belong not only to SAD, but also to generalized anxiety disorder. The data are equivocal regarding whether to consider avoidant personality disorder simply a severe form of SAD. Secondary data analyses, field trials, and validity tests are needed to investigate the recommendations and options.",
"title": ""
},
{
"docid": "8615959de53d6579613e1213a53e6525",
"text": "This paper addresses the problem of frequency domain packet scheduling (FDPS) incorporating spatial division multiplexing (SDM) multiple input multiple output (MIMO) techniques on the 3GPP Long Term Evolution (LTE) downlink. We impose the LTE MIMO constraint of selecting only one MIMO mode (spatial multiplexing or transmit diversity) per user per transmission time interval (TTI). First, we address the optimal MIMO mode selection (multiplexing or diversity) per user in each TTI in order to maximize the proportional fair (PF) criterion extended to frequency and spatial domains. We prove that the SU-MIMO (single-user MIMO) FDPS problem under the LTE requirement is NP-hard and therefore, we develop two approximation algorithms (one with full channel feedback and the other with partial channel feedback) with provable performance bounds. Based on 3GPP LTE system model simulations, the approximation algorithm with partial channel feedback is shown to have comparable performance to the one with full channel feedback, while significantly reducing the channel feedback overhead by nearly 50%.",
"title": ""
},
{
"docid": "df3cad5eb68df1bc5d6770f4f700ac65",
"text": "Substrate integrated waveguide (SIW) cavity-backed antenna arrays have advantages of low-profile, high-gain and low-cost fabrication. However, traditional SIW cavity-backed antenna arrays usually load with extra feeding networks, which make the whole arrays larger and more complex. A novel 4 × 4 SIW cavity-backed antenna array without using individual feeding network is presented in this letter. The proposed antenna array consists of sixteen SIW cavities connected by inductive windows as feeding network and wide slots on the surface of each cavity as radiating part. Without loading with extra feeding network, the array is compact.",
"title": ""
},
{
"docid": "5710a8fb98304848f50a4460a4dfc53d",
"text": "There are many challenges and criticisms attached to the conduct of research, none the least of which is a notion that much of the research undertaken in professional disciplines such as nursing may not have clinical and/or practical relevance. While there are a plethora of qualitative research methods that individuals must consider when designing research studies, one method stands out Grounded Theory (GT). Grounded theory was developed in the early 1960’s by Glaser and Strauss. With its theoretical orientation based in sociology, GT strives to understand and explain human behavior through inductive reasoning processes (Elliott & Lazenbatt, 2005). Because of its emphasis on the utilization of a variety of data sources that are grounded in particular contexts, GT provides a natural theoretical fit when designing nursing research studies. In this article, the authors provide an overview of GT and then describe the appropriateness, advantages, and disadvantages of applying it as part of the research design process. Additionally, the authors highlight the importance of taking a reflexive position to stay engaged while interacting with the data, and explore how to apply GT theory to particular research questions and studies. Finally, the strengths and limitations of this method of inquiry as applied to nursing research using a brief case study approach is presented.",
"title": ""
},
{
"docid": "be9ebd1cd6f51ed22ac04d5dd9d99202",
"text": "We present a new garbled circuit construction for two-party secure function evaluation (SFE). In our one-round protocol, XOR gates are evaluated “for free”, which results in the corresponding improvement over the best garbled circuit implementations (e.g. Fairplay [19]). We build permutation networks [26] and Universal Circuits (UC) [25] almost exclusively of XOR gates; this results in a factor of up to 4 improvement (in both computation and communication) of their SFE. We also improve integer addition and equality testing by factor of up to 2. We rely on the Random Oracle (RO) assumption. Our constructions are proven secure in the semi-honest model.",
"title": ""
},
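The "free" evaluation of XOR gates referenced above is usually realized by giving every wire a pair of labels that differ by one global secret offset R; the garbler then defines an XOR gate's output labels as the XOR of the input labels, so no garbled table is needed and the evaluator simply XORs the labels it holds. A minimal sketch of that bookkeeping, under the assumption of 128-bit labels, is shown below.

```python
import os

KAPPA = 16  # label length in bytes (assumption; the security parameter)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Global offset R shared by all wires (kept secret by the garbler).
R = os.urandom(KAPPA)

def new_wire():
    w0 = os.urandom(KAPPA)        # label for logical 0
    return w0, xor(w0, R)         # label for logical 1 = w0 XOR R

a0, a1 = new_wire()
b0, b1 = new_wire()

# "Free" XOR gate: the output-0 label is a0 XOR b0, and no garbled table is sent.
c0 = xor(a0, b0)
c1 = xor(c0, R)

# The evaluator holds one label per input wire and just XORs them:
assert xor(a0, b1) == c1          # 0 XOR 1 = 1
assert xor(a1, b1) == c0          # 1 XOR 1 = 0
print("free-XOR evaluation consistent")
```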
{
"docid": "b0d91cac5497879ea87bdf9034f3fd6d",
"text": "This paper presents an open-source indoor navigation system for quadrotor micro aerial vehicles(MAVs), implemented in the ROS framework. The system requires a minimal set of sensors including a planar laser range-finder and an inertial measurement unit. We address the issues of autonomous control, state estimation, path-planning, and teleoperation, and provide interfaces that allow the system to seamlessly integrate with existing ROS navigation tools for 2D SLAM and 3D mapping. All components run in real time onboard the MAV, with state estimation and control operating at 1 kHz. A major focus in our work is modularity and abstraction, allowing the system to be both flexible and hardware-independent. All the software and hardware components which we have developed, as well as documentation and test data, are available online.",
"title": ""
},
{
"docid": "9e0a28a8205120128938b52ba8321561",
"text": "Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.",
"title": ""
},
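The "linear combinations of a few elements from a learned dictionary" above come from a sparse coding step, typically an l1-regularized least-squares fit of the signal to the dictionary. The sketch below shows that step with a plain ISTA solver on a toy dictionary; it is only the inner sparse coding routine, under assumed sizes and regularization, and does not include the supervised, task-driven dictionary updates that are the paper's contribution.

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    # ISTA for the lasso step: min_x 0.5*||Ax - b||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))              # toy dictionary (assumption)
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = 1.0
y = D @ x_true + 0.01 * rng.normal(size=64)

code = ista(D, y, lam=0.05)
print(np.nonzero(np.abs(code) > 1e-3)[0])   # should roughly recover the support
```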
{
"docid": "505a150ad558f60a57d7f708a05288f3",
"text": "Probiotic supplements in food industry have attracted a lot of attention and shown a remarkable growth in this field. Metabolic engineering (ME) approaches enable understanding their mechanism of action and increases possibility of designing probiotic strains with desired functions. Probiotic microorganisms generally referred as industrially important lactic acid bacteria (LAB) which are involved in fermenting dairy products, food, beverages and produces lactic acid as final product. A number of illustrations of metabolic engineering approaches in industrial probiotic bacteria have been described in this review including transcriptomic studies of Lactobacillus reuteri and improvement in exopolysaccharide (EPS) biosynthesis yield in Lactobacillus casei LC2W. This review summaries various metabolic engineering approaches for exploring metabolic pathways. These approaches enable evaluation of cellular metabolic state and effective editing of microbial genome or introduction of novel enzymes to redirect the carbon fluxes. In addition, various system biology tools such as in silico design commonly used for improving strain performance is also discussed. Finally, we discuss the integration of metabolic engineering and genome profiling which offers a new way to explore metabolic interactions, fluxomics and probiogenomics using probiotic bacteria like Bifidobacterium spp and Lactobacillus spp.",
"title": ""
},
{
"docid": "a2a8f1011606de266c3b235f31f95bee",
"text": "In this paper, we look at three different methods of extracting the argumentative structure from a piece of natural language text. These methods cover linguistic features, changes in the topic being discussed and a supervised machine learning approach to identify the components of argumentation schemes, patterns of human reasoning which have been detailed extensively in philosophy and psychology. For each of these approaches we achieve results comparable to those previously reported, whilst at the same time achieving a more detailed argument structure. Finally, we use the results from these individual techniques to apply them in combination, further improving the argument structure identification.",
"title": ""
},
{
"docid": "f90e6d3084733994935fcbee64286aec",
"text": "To find the position of an acoustic source in a room, typically, a set of relative delays among different microphone pairs needs to be determined. The generalized cross-correlation (GCC) method is the most popular to do so and is well explained in a landmark paper by Knapp and Carter. In this paper, the idea of cross-correlation coefficient between two random signals is generalized to the multichannel case by using the notion of spatial prediction. The multichannel spatial correlation matrix is then deduced and its properties are discussed. We then propose a new method based on the multichannel spatial correlation matrix for time delay estimation. It is shown that this new approach can take advantage of the redundancy when more than two microphones are available and this redundancy can help the estimator to better cope with noise and reverberation.",
"title": ""
},
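As background for the passage above, the classical pairwise GCC estimate of a relative delay can be computed from a PHAT-weighted cross-power spectrum; the paper's multichannel spatial correlation matrix generalizes this beyond a single microphone pair. The sketch below is the standard two-channel GCC-PHAT on synthetic signals, with the sampling rate, signal, and delay chosen arbitrarily for illustration.

```python
import numpy as np

def gcc_phat(x, y, fs):
    # classic pairwise GCC with PHAT weighting; the paper's multichannel
    # spatial-correlation estimator generalizes this beyond two channels
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    S = np.conj(X) * Y                      # cross-power spectrum
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
    lag = np.argmax(np.abs(cc)) - n // 2    # samples by which y lags x
    return lag / fs

fs = 16000
rng = np.random.default_rng(0)
s = rng.normal(size=4096)                   # toy source signal (assumption)
delay = 12                                  # true delay in samples
x = s
y = np.concatenate((np.zeros(delay), s[:-delay]))
print(gcc_phat(x, y, fs) * fs)              # approximately +12 samples
```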
{
"docid": "edec01ca60d2fbdd82a419441d876b89",
"text": "The concept of school engagement has attracted increasing attention as representing a possible antidote to declining academic motivation and achievement. Engagement is presumed to be malleable, responsive to contextualfeatures, and amenable to environmental change. Researchers describe behavioral, emotional, and cognitive engagement and recommend studying engagement as a multifaceted construct. This article reviews definitions, measures, precursors, and outcomes of engagement; discusses limitations in the existing research; and suggests improvements. The authors conclude that, although much has been learned, the potential contribution of the concept of school engagement to research on student experience has yet to be realized. They callfor richer characterizations of how students behave, feel, and think-research that could aid in the development offinely tuned interventions.",
"title": ""
},
{
"docid": "258e931d5c8d94f73be41cbb0058f49b",
"text": "VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output.\n VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.",
"title": ""
}
] |
scidocsrr
|
a73f45ac26927bc7b13c38acd3d6ab51
|
GALOIS REPRESENTATIONS AND MODULAR FORMS
|
[
{
"docid": "a272d084fe7032e7f3c6df5a2e6bec8e",
"text": "In the work of one of us (A.W.) on the conjecture that all elliptic curves defined over Q are modular, the importance of knowing that certain Hecke algebras are complete intersections was established. The purpose of this article is to provide the missing ingredient in [W2] by establishing that the Hecke algebras considered there are complete intersections. As is recorded in [W2], a method going back to Mazur [M] allows one to show that these algebras are Gorenstein, but this seems to be too weak for the purposes of that paper. The methods of this paper are related to those of chapter 3 of [W2]. We would like to thank Henri Darmon, Fred Diamond and Gerd Faltings for carefully reading the first version of this article. Gerd Faltings has also suggested a simplification of our argument and we would like to thank him for allowing us to reproduce this in the appendix to this paper. R.T. would like to thank A.W. for his invitation to collaborate on these problems and for sharing his many insights into the questions considered. R.T. would also like to thank Princeton University, Université de Paris 7 and Harvard University for their hospitality during some of the work on this paper. A.W. was supported by an NSF grant.",
"title": ""
}
] |
[
{
"docid": "d415096fccd9b0f082b7202d0c9f32fe",
"text": "Penil duplikasyon(diğer adıyla Diphallia veya diphallasparatus) beş milyon da bir görülen nadir bir malformationdur. Sıklıkla anorektal, üriner ve vertebral anomalilerle birliktedir. Hastanemiz üroloji kliniğine penil şekil bozukluğu şikayetiyle başvuran 15 yaşında erkek hasta anomalinin ve eşlik edebilecek diğer patolojilerin görüntülenebilmesi amacıyla radyoloji ünitesine gönderilmişti. MR incelemede tam olmayan psödoduplikasyon ile uyumlu olan ve diğer kanal ile birleşen ikinci bir uretra distalde aksesuar glans düzeyinde künt olarak sonlanmaktaydı. Doppler USG inceleme ile korpus kavernozum ve korpus spongiozum düzeylerinde vasküler yapılar değerlendirildi. Nadir rastlanan penil duplikasyon olgumuzda yapılan MR , sonografi ve indirekt röntgen incelemelerinin sonuçlarını literatürde rastlanan az sayıda benzer olguları da gözden geçirerek sunmayı amaçladık. Abstract",
"title": ""
},
{
"docid": "e31f3642a238f0be69e1e7cd1cc95067",
"text": "In the past, several systems have been presented that enable users to view occluded points of interest using Augmented Reality X-ray visualizations. It is challenging to design a visualization that provides correct occlusions between occluder and occluded objects while maximizing legibility. We have previously published an Augmented Reality X-ray visualization that renders edges of the occluder region over the occluded region to facilitate correct occlusions while providing foreground context. While this approach is simple and works in a wide range of situations, it provides only minimal context of the occluder object.",
"title": ""
},
{
"docid": "e7bfcc9cf345ae1570f7dfddb8cf2444",
"text": "Motivated by the need to provide services to alleviate range anxiety of electric vehicles, we consider the problem of balancing charging demand across a network of charging stations. Our objective is to reduce the potential for excessively long queues to build up at some charging stations, although other charging stations are underutilized. A stochastic balancing algorithm is presented to achieve these goals. A further feature of this algorithm is that it is fully decentralized and facilitates a plug-and-play type of behavior. Using our system, the charging stations can join and leave the network without any changes to, or communication with, a centralized infrastructure. Analysis and simulations are presented to illustrate the efficacy of our algorithm.",
"title": ""
},
{
"docid": "e9223a6ef6dec79724f59f2f5214becc",
"text": "JavaScript is a powerful and flexible prototype-based scripting language that is increasingly used by developers to create interactive web applications. The language is interpreted, dynamic, weakly-typed, and has first-class functions. In addition, it interacts with other web languages such as CSS and HTML at runtime. All these characteristics make JavaScript code particularly error-prone and challenging to write and maintain. Code smells are patterns in the source code that can adversely influence program comprehension and maintainability of the program in the long term. We propose a set of 13 JavaScript code smells, collected from various developer resources. We present a JavaScript code smell detection technique called JSNOSE. Our metric-based approach combines static and dynamic analysis to detect smells in client-side code. This automated technique can help developers to spot code that could benefit from refactoring. We evaluate the smell finding capabilities of our technique through an empirical study. By analyzing 11 web applications, we investigate which smells detected by JSNOSE are more prevalent.",
"title": ""
},
{
"docid": "4e3f56861c288cca8191a11d2125ede0",
"text": "A top-hat monopole Yagi antenna is presented to produce an end-fire radiation beam. The antenna has an extremely low profile and wide operating bandwidth. It consists of a folded top-hat monopole as the driven element and four short-circuited top-hat monopoles as parasitic elements. A broad bandwidth can be achieved by adjusting the different resonances introduced by the driven and parasitic elements. A prototype operating at the UHF band (f0 = 550 MHz) is fabricated and tested. Measured results show that a fractional bandwidth (|S11| <; -10 dB) of 20.5% is obtained while the antenna height is only λ0/28 at the center frequency.",
"title": ""
},
{
"docid": "1289f47ea43ddd72fc90977b0a538d1c",
"text": "This study identifies evaluative, attitudinal, and behavioral factors that enhance or reduce the likelihood of consumers aborting intended online transactions (transaction abort likelihood). Path analyses show that risk perceptions associated with eshopping have direct influence on the transaction abort likelihood, whereas benefit perceptions do not. In addition, consumers who have favorable attitudes toward e-shopping, purchasing experiences from the Internet, and high purchasing frequencies from catalogs are less likely to abort intended transactions. The results also show that attitude toward e-shopping mediate relationships between the transaction abort likelihood and other predictors (i.e., effort saving, product offering, control in the information search, and time spent on the Internet per visit). # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2956f80e896a660dbd268f9212e6d00f",
"text": "Writing as a productive skill in EFL classes is outstandingly significant. In writing classes there needs to be an efficient relationship between the teacher and students. The teacher as the only audience in many writing classes responds to students’ writing. In the early part of the 21 century the range of technologies available for use in classes has become very diverse and the ways they are being used in classrooms all over the world might affect the outcome we expect from our classes. As the present generations of students are using new technologies, the application of these recent technologies in classes might be useful. Using technology in writing classes provides opportunities for students to hand their written work to the teacher without the need for any face-to-face interaction. This present study investigates the effect of Edmodo on EFL learners’ writing performance. A quasi-experimental design was used in this study. The participants were 40 female advanced-level students attending advanced writing classes at Irana English Institute, Razan Hamedan. The focus was on the composition writing ability. The students were randomly assigned to two groups, experimental and control. Edmodo was used in the experimental group. Mann-Whitney U test was used for data analysis; the results indicated that the use of Edmodo in writing was more effective on EFL learners’ writing performance participating in this study.",
"title": ""
},
{
"docid": "ef773a23445ba125559d1c03e9267ef8",
"text": "Understanding the complex dynamic and uncertain characteristics of organisational employees who perform authorised or unauthorised information security activities is deemed to be a very important and challenging task. This paper presents a conceptual framework for classifying and organising the characteristics of organisational subjects involved in these information security practices. Our framework expands the traditional Human Behaviour and the Social Environment perspectives used in social work by identifying how knowledge, skills and individual preferences work to influence individual and group practices with respect to information security management. The classification of concepts and characteristics in the framework arises from a review of recent literature and is underpinned by theoretical models that explain these concepts and characteristics. Further, based upon an exploratory study of three case organisations in Saudi Arabia involving extensive interviews with senior managers, department managers, IT managers, information security officers, and IT staff; this article describes observed information security practices and identifies several factors which appear to be particularly important in influencing information security behaviour. These factors include values associated with national and organisational culture and how they manifest in practice, and activities related to information security management.",
"title": ""
},
{
"docid": "45fe8a9188804b222df5f12bc9a486bc",
"text": "There is renewed interest in the application of gypsum to agricultural lands, particularly of gypsum produced during flue gas desulfurization (FGD) at coal-burning power plants. We studied the effects of land application of FGD gypsum to corn ( L.) in watersheds draining to the Great Lakes. The FGD gypsum was surface applied at 11 sites at rates of 0, 1120, 2240, and 4480 kg ha after planting to 3-m by 7.6-m field plots. Approximately 12 wk after application, penetration resistance and hydraulic conductivity were measured in situ, and samples were collected for determination of bulk density and aggregate stability. No treatment effect was detected for penetration resistance or hydraulic conductivity. A positive treatment effect was seen for bulk density at only 2 of 10 sites tested. Aggregate stability reacted similarly across all sites and was decreased with the highest application of FGD gypsum, whereas the lower rates were not different from the control. Overall, there were few beneficial effects of the FGD gypsum to soil physical properties in the year of application.",
"title": ""
},
{
"docid": "542c115a46d263ee347702cf35b6193c",
"text": "We obtain universal bounds on the energy of codes and for designs in Hamming spaces. Our bounds hold for a large class of potential functions, allow unified treatment, and can be viewed as a generalization of the Levenshtein bounds for maximal codes.",
"title": ""
},
{
"docid": "558146ea927b301b7372d0954c2a4253",
"text": "In the literature, methods for fitting superellipses to data tend to be computationally expensive due to the non-linear nature of the problem. This paper describes and tests several fitting techniques which provide different trade-offs between efficiency and accuracy. In addition, we describe various alternative error of fits (EOF) that can be applied by most superellipse fitting methods. keywords: curve, superellipse, fitting, error measure",
"title": ""
},
{
"docid": "b557a68b3c49d9f9f4cae6dc5ee88a19",
"text": "Powered lower limb exoskeletons require high-performance actuator systems, capable of producing zero to high-assistive torque and at the same time yielding to human interaction torques. Such variable impedance can be achieved by the means of compliant actuators. Because of their intrinsic compliance and high force-to-weight ratio pneumatic muscles are a viable option. However, previous pneumatic muscle powered exoskeleton designs either used them as a position source or failed to meet the high-dynamic torque requirements when using them as a torque source. This paper contributes to the improvement of pneumatic muscle-based actuator systems as a torque source for exoskeleton-type robots. The use of pleated pneumatic artificial muscles in a novel actuator system design allows for a higher torque range in a larger range of motion. Performance evaluation results are given for a 1 DOF test setup and a powered knee exoskeleton. The proposed torque controller achieves the dynamic torques required for zero to full assistance at moderate walking speeds.",
"title": ""
},
{
"docid": "3bd2bfd1c7652f8655d009c085d6ed5c",
"text": "The past decade has witnessed the boom of human-machine interactions, particularly via dialog systems. In this paper, we study the task of response generation in open-domain multi-turn dialog systems. Many research efforts have been dedicated to building intelligent dialog systems, yet few shed light on deepening or widening the chatting topics in a conversational session, which would attract users to talk more. To this end, this paper presents a novel deep scheme consisting of three channels, namely global, wide, and deep ones. The global channel encodes the complete historical information within the given context, the wide one employs an attention-based recurrent neural network model to predict the keywords that may not appear in the historical context, and the deep one trains a Multi-layer Perceptron model to select some keywords for an in-depth discussion. Thereafter, our scheme integrates the outputs of these three channels to generate desired responses. To justify our model, we conducted extensive experiments to compare our model with several state-of-the-art baselines on two datasets: one is constructed by ourselves and the other is a public benchmark dataset. Experimental results demonstrate that our model yields promising performance by widening or deepening the topics of interest.",
"title": ""
},
{
"docid": "2e3dcd4ba0dbcabb86c8716d73760028",
"text": "Power transformers are one of the most critical devices in power systems. It is responsible for voltage conversion, power distribution and transmission, and provides power services. Therefore, the normal operation of the transformer is an important guarantee for the safe, reliable, high quality and economical operation of the power system. It is necessary to minimize and reduce the occurrence of transformer failure and accident. The on-line monitoring and fault diagnosis of power equipment is not only the prerequisite for realizing the predictive maintenance of equipment, but also the key to ensure the safe operation of equipment. Although the analysis of dissolved gas in transformer oil is an important means of transformer insulation monitoring, the coexistence of two kinds of faults, such as discharge and overheat, can lead to a lower positive rate of diagnosis. In this paper, we use the basic particle swarm optimization algorithm to optimize the BP neural network DGA method, select the typical oil in the oil as a neural network input, and then use the trained particle swarm algorithm to optimize the neural network for transformer fault type diagnosis. The results show that the method has a good classification effect, which can solve the problem of difficult to distinguish the faults of the transformer when the discharge and overheat coexist. The positive rate of fault diagnosis is high.",
"title": ""
},
{
"docid": "7b99361ec595958457819fd2c4c67473",
"text": "At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. In this work, we consider how the electrical properties of humans and their attire can be used to support user differentiation on touchscreens. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing, which measures the impedance of a user to the environment (i.e., ground) across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different footwear, and so on. This, in turn, yields different impedance profiles, which allows for touch events, including multitouch gestures, to be attributed to a particular user. This has many interesting implications for interactive design. We describe and evaluate our sensing approach, demonstrating that the technique has considerable promise. We also discuss limitations, how these might be overcome, and next steps.",
"title": ""
},
{
"docid": "25f73f6a65d115443ef56b8d25527adc",
"text": "Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data comprised of over 120,000 spoken audio captions for the Places image dataset and evaluate our model on an image search and annotation task. We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.",
"title": ""
},
{
"docid": "7e9284054a4898a9542f7ffaf11c6c6b",
"text": "Biological brains can adapt and learn from past experience. Yet neuroevolution, i.e. automatically creating artificial neural networks (ANNs) through evolutionary algorithms, has sometimes focused on static ANNs that cannot change their weights during their lifetime. A profound problem with evolving adaptive systems is that learning to learn is highly deceptive. Because it is easier at first to improve fitness without evolving the ability to learn, evolution is likely to exploit domain-dependent static (i.e. non-adaptive) heuristics. This paper analyzes this inherent deceptiveness in a variety of different dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely. A series of experiments and an in-depth analysis show how behaviors that could potentially serve as a stepping stone to finding adaptive solutions are discovered by novelty search yet are missed by fitness-based search. The conclusion is that novelty search has the potential to foster the emergence of adaptive behavior in reward-based learning tasks, thereby opening a new direction for research in evolving plastic ANNs.",
"title": ""
},
{
"docid": "d3156f87367e8f55c3e62d376352d727",
"text": "The topic of deep-learning has recently received considerable attention in the machine learning research community, having great potential to liberate computer scientists from hand-engineering training datasets, because the method can learn the desired features automatically. This is particularly beneficial in medical research applications of machine learning, where getting good hand labelling of data is especially expensive. We propose application of a single-layer sparse-auto encoder to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for fully automatic classification of tissue types in a large unlabelled dataset with minimal human interference -- in a manner similar to data-mining. DCE-MRI analysis, looking at the change of the MR contrast-agent concentration over successively acquired images, is time-series analysis. We analyse the change of brightness (which is related to the contrast-agent concentration) of the DCE-MRI images over time to classify different tissue types in the images. Therefore our system is an application of an auto encoder to time-series analysis while the demonstrated result and further possible successive application areas are in computer vision. We discuss the important factors affecting performance of the system in applying the auto encoder to the time-series analysis of DCE-MRI medical image data.",
"title": ""
},
{
"docid": "5bd90edcb23a828a8e1e73a4dbff1bd4",
"text": "The purpose of this study was to examine the effect of silver nanoparticles (AgNPs) produced using the high-voltage arc discharge method on the growth and metabolism of common wheat seedlings. Additionally, a simultaneous assessment of the AgNP-induced reduction in seedling infection by Fusarium culmorum (Fc) was performed. AgNP- and Fc-treated seedlings indicated that both factors considerably inhibited their growth. A significant Fc-induced reduction in seedling blight was observed following treatment with AgNPs; however, treatment with nanoparticles was also accompanied by a serious disintegration of the cell membranes of roots. Moreover, treatment with AgNPs increased the quantum efficiency of energy trapping in the PSII reaction centre (Fv/Fm) with a simultaneous decrease in energy dissipation in the form of heat. Induction of photosynthesis in the presence of AgNPs did not affect height but was reflected in higher total dry weight. Moreover, analysis of antioxidant enzyme activity typical for the stress response indicated the toxicity of AgNPs treatment compared to Fc treatment. Seedlings exposed to AgNP activity demonstrated accumulation of Ag in roots and its translocation to aerial parts, while the pathogen reduced both accumulation and translocation of this element.",
"title": ""
},
{
"docid": "030b25a7c93ca38dec71b301843c7366",
"text": "Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands.",
"title": ""
}
] |
scidocsrr
|
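Illustrative aside (not part of the dataset rows above or below): one of the negative passages in the preceding record describes diagnosing transformer fault types from dissolved-gas analysis (DGA) features with a PSO-optimized BP neural network. The sketch below trains only a plain scikit-learn MLP on synthetic, made-up gas-ratio features; it omits the particle swarm optimization step and uses no real DGA data, so it illustrates the feature-to-fault-class setup rather than that paper's method.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for dissolved-gas features (hypothetical H2, CH4, C2H6,
# C2H4, C2H2 ratios) and a made-up binary fault label (e.g. overheating vs.
# discharge). Real DGA diagnosis would use measured gas concentrations and
# the PSO-tuned network described in the passage above.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(300, 5))
y = (X[:, 3] + 0.5 * X[:, 4] > X[:, 1]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```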
7f47db24c936ddf77b0f9e5104c136b4
|
A Multi-stream Bi-directional Recurrent Neural Network for Fine-Grained Action Detection
|
[
{
"docid": "695af0109c538ca04acff8600d6604d4",
"text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"title": ""
}
] |
[
{
"docid": "e8cf458c60dc7b4a8f71df2fabf1558d",
"text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.",
"title": ""
},
{
"docid": "4816d3c4ca52f2ba592b29636b4a3c35",
"text": "In this paper, we describe a system that applies maximum entropy (ME) models to the task of named entity recognition (NER). Starting with an annotated corpus and a set of features which are easily obtainable for almost any language, we first build a baseline NE recognizer which is then used to extract the named entities and their context information from additional nonannotated data. In turn, these lists are incorporated into the final recognizer to further improve the recognition accuracy.",
"title": ""
},
{
"docid": "4e67d4c9fb2b95bcb40aa7a2d34cbdf2",
"text": "Currently, multiple data vendors utilize the cloud-computing paradigm for trading raw data, associated analytical services, and analytic results as a commodity good. We observe that these vendors often move the functionality of data warehouses to cloud-based platforms. On such platforms, vendors provide services for integrating and analyzing data from public and commercial data sources. We present insights from interviews with seven established vendors about their key challenges with regard to pricing strategies in different market situations and derive associated research problems for the business intelligence community.",
"title": ""
},
{
"docid": "60d6869cadebea71ef549bb2a7d7e5c3",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "23a0b1d0873c456390535beb406b8532",
"text": "The deficit-model of science communication assumes increased communication about science issues will move public opinion toward the scientific consensus. However, in the case of climate change, public polarization about the issue has increased in recent years, not diminished. In this study, we draw from theories of motivated reasoning, social identity, and persuasion to examine how science-based messages may increase public polarization on controversial science issues such as climate change. Exposing 240 adults to simulated news stories about possible climate change health impacts on different groups, we found the influence of identification with potential victims was contingent on participants’ political partisanship. This partisanship increased the degree of political polarization on support for climate mitigation policies and resulted in a boomerang effect among Republican participants. Implications for understanding the role of motivated reasoning within the context of science communication are discussed.",
"title": ""
},
{
"docid": "21d7a386e974fd4b456f6d35b9ac8d57",
"text": "The efficacy of academic-mind-set interventions has been demonstrated by small-scale, proof-of-concept interventions, generally delivered in person in one school at a time. Whether this approach could be a practical way to raise school achievement on a large scale remains unknown. We therefore delivered brief growth-mind-set and sense-of-purpose interventions through online modules to 1,594 students in 13 geographically diverse high schools. Both interventions were intended to help students persist when they experienced academic difficulty; thus, both were predicted to be most beneficial for poorly performing students. This was the case. Among students at risk of dropping out of high school (one third of the sample), each intervention raised students' semester grade point averages in core academic courses and increased the rate at which students performed satisfactorily in core courses by 6.4 percentage points. We discuss implications for the pipeline from theory to practice and for education reform.",
"title": ""
},
{
"docid": "fe5377214840549fbbb6ad520592191d",
"text": "The ability to exert an appropriate amount of force on brain tissue during surgery is an important component of instrument handling. It allows surgeons to achieve the surgical objective effectively while maintaining a safe level of force in tool-tissue interaction. At the present time, this knowledge, and hence skill, is acquired through experience and is qualitatively conveyed from an expert surgeon to trainees. These forces can be assessed quantitatively by retrofitting surgical tools with sensors, thus providing a mechanism for improved performance and safety of surgery, and enhanced surgical training. This paper presents the development of a force-sensing bipolar forceps, with installation of a sensory system, that is able to measure and record interaction forces between the forceps tips and brain tissue in real time. This research is an extension of a previous research where a bipolar forceps was instrumented to measure dissection and coagulation forces applied in a single direction. Here, a planar forceps with two sets of strain gauges in two orthogonal directions was developed to enable measuring the forces with a higher accuracy. Implementation of two strain gauges allowed compensation of strain values due to deformations of the forceps in other directions (axial stiffening) and provided more accurate forces during microsurgery. An experienced neurosurgeon performed five neurosurgical tasks using the axial setup and repeated the same tasks using the planar device. The experiments were performed on cadaveric brains. Both setups were shown to be capable of measuring real-time interaction forces. Comparing the two setups, under the same experimental condition, indicated that the peak and mean forces quantified by planar forceps were at least 7% and 10% less than those of axial tool, respectively; therefore, utilizing readings of all strain gauges in planar forceps provides more accurate values of both peak and mean forces than axial forceps. Cross-correlation analysis between the two force signals obtained, one from each cadaveric practice, showed a high similarity between the two force signals.",
"title": ""
},
{
"docid": "42bad17aa74d4dc972b48f054656de48",
"text": "We present a method for learning image representations using a two-layer sparse coding scheme at the pixel level. The first layer encodes local patches of an image. After pooling within local regions, the first layer codes are then passed to the second layer, which jointly encodes signals from the region. Unlike traditional sparse coding methods that encode local patches independently, this approach accounts for high-order dependency among patterns in a local image neighborhood. We develop algorithms for data encoding and codebook learning, and show in experiments that the method leads to more invariant and discriminative image representations. The algorithm gives excellent results for hand-written digit recognition on MNIST and object recognition on the Caltech101 benchmark. This marks the first time that such accuracies have been achieved using automatically learned features from the pixel level, rather than using hand-designed descriptors.",
"title": ""
},
{
"docid": "c22d8851b3c2dd228d13f940d4e7cebe",
"text": "This paper explores the use of cognitive mapping for eliciting users' sensemaking during information system (IS) appropriation. Despite the potential usefulness of sensemaking, few studies in IS research use it as a theoretical lens to address IS appropriation. A possible reason for this may be that sensemaking does not easily lend itself to be used in practice. We introduce cognitive mapping as a way to elicit users' sensemaking and illustrate its value by reporting on findings from an empirical study of the introduction of an Electronic Patient Record (EPR) system. The contribution of the paper is threefold: first, our findings demonstrate cognitive mapping's use for eliciting users' sensemaking during IS appropriation. Second, our findings illustrate how cognitive mapping can be used as a dynamic approach facilitating collective negotiation of meaning. Third, we contribute with a thorough discussion of the epistemological and methodological assumptions underlying cognitive mapping to ensure its validity and trustworthiness.",
"title": ""
},
{
"docid": "822b3d69fd4c55f45a30ff866c78c2b1",
"text": "Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signalto-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the timeand frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.",
"title": ""
},
{
"docid": "b6707e5553e23e1a7786230217e81d6a",
"text": "Service robots have to robustly follow and interact with humans. In this paper, we propose a very fast multi-people tracking algorithm designed to be applied on mobile service robots. Our approach exploits RGB-D data and can run in real-time at very high frame rate on a standard laptop without the need for a GPU implementation. It also features a novel depthbased sub-clustering method which allows to detect people within groups or even standing near walls. Moreover, for limiting drifts and track ID switches, an online learning appearance classifier is proposed featuring a three-term joint likelihood. We compared the performances of our system with a number of state-of-the-art tracking algorithms on two public datasets acquired with three static Kinects and a moving stereo pair, respectively. In order to validate the 3D accuracy of our system, we created a new dataset in which RGB-D data are acquired by a moving robot. We made publicly available this dataset which is not only annotated by hand, but the ground-truth position of people and robot are acquired with a motion capture system in order to evaluate tracking accuracy and precision in 3D coordinates. Results of experiments on these datasets are presented, showing that, even without the need for a GPU, our approach achieves state-of-the-art accuracy and superior speed. Matteo Munaro Via Gradenigo 6A, 35131 Padova, Italy Tel.: +39-049-8277831 E-mail: matteo.munaro@dei.unipd.it Emanuele Menegatti Via Gradenigo 6A, 35131 Padova, Italy Tel.: +39-049-8277651 E-mail: emg@dei.unipd.it (a) (b) Fig. 1 Example of our system output: (a) a 3D bounding box is drawn for every tracked person on the RGB image, (b) the corresponding 3D point cloud is reported, together with the estimated people trajectories.",
"title": ""
},
{
"docid": "d35fea5ca09cb0e5bb0af43d0f1931d5",
"text": "Change your habit to hang or waste the time to only chat with your friends. It is done by your everyday, don't you feel bored? Now, we will show you the new habit that, actually it's a very old habit to do that can make your life more qualified. When feeling bored of always chatting with your friends all free time, you can find the book enPDF encyclopedia of knowledge management second edition and then read it.",
"title": ""
},
{
"docid": "c58e4246640a5c6f8f5c27c758d343ff",
"text": "Data processing computers may soon be eclipsed by a next generation of brain-like learning machines based on the \"Autosophy\" information theory. This will have a profound impact on communication and computing applications. Data processing computers are essentially adding or calculating machines that cannot find \"meaning\" as our own brains obviously can. No matter the speed of computation or the complexity of the software, computers will not evolve into brain-like machines. All that can be achieved are mere simulations. The basic problem can be traced back to an outdated (Shannon) information theory that treats all data items (such as ASCII characters or pixels) as \"quantities\" in meaningless bit streams. In 1974 Klaus Holtz developed a new Autosophy information theory, which treats all data items as \"addresses.\" The original Autosophy research explains the functioning of self-assembling natural structures, such as chemical crystals or living trees. The same natural laws and principles can also produce self-assembling data structures, which grow like data crystals or data trees in electronic memories, without computing or programming. Replacing the programmed data processing computer with brain-like, self-learning, failure-proof \"autosopher\" promises a true paradigm shift in technology, resulting in system architectures with true \"learning\" and eventually true Artificial Intelligence.",
"title": ""
},
{
"docid": "28b796954834230a0e8218e24bab0d35",
"text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).",
"title": ""
},
{
"docid": "25eaedc777c67911097827df62336dc7",
"text": "The Millennium Development Goals (MDGs) mark a historic and eff ective method of global mobilisation to achieve a set of important social priorities worldwide. They express widespread public concern about poverty, hunger, disease, unmet schooling, gender inequality, and environmental degradation. By packaging these priorities into an easily understandable set of eight goals, and by establishing measurable and timebound objectives, the MDGs help to promote global awareness, political accountability, improved metrics, social feedback, and public pressures. As described by Bill Gates, the MDGs have become a type of global report card for the fi ght against poverty for the 15 years from 2000 to 2015. As with most report cards, they generate incentives to improve performance, even if not quite enough incentives for both rich and poor countries to produce a global class of straight-A students. Developing countries have made substantial progress towards achievement of the MDGs, although the progress is highly variable across goals, countries, and regions. Mainly because of startling economic growth in China, developing countries as a whole have cut the poverty rate by half between 1990 and 2010. Some countries will achieve all or most of the MDGs, whereas others will achieve very few. By 2015, most countries will have made meaningful progress towards most of the goals. Moreover, for more than a decade, the MDGs have remained a focus of global policy debates and national policy planning. They have become incorporated into the work of non-governmental organisations and civil society more generally, and are taught to students at all levels of education. The probable shortfall in achievement of the MDGs is indeed serious, regrettable, and deeply painful for people with low income. The shortfall represents a set of operational failures that implicate many stakeholders, in both poor and rich countries. Promises of offi cial development assistance by rich countries, for example, have not been kept. Nonetheless, there is widespread feeling among policy makers and civil society that progress against poverty, hunger, and disease is notable; that the MDGs have played an important part in securing that progress; and that globally agreed goals to fi ght poverty should continue beyond 2015. In a world already undergoing dangerous climate change and other serious environmental ills, there is also widespread understanding that worldwide environmental objectives need a higher profi le alongside the poverty-reduction objectives. For these reasons, the world’s governments seem poised to adopt a new round of global goals to follow the 15 year MDG period. UN Secretary-General Ban Ki-Moon’s high-level global sustainability panel, appointed in the lead-up to the Rio+20 summit in June, 2012, has issued a report recommending that the world adopt a set of Sustainable Development Goals (SDGs). This spring, Secretary-General Ban indicated that after the Rio+20 summit he plans to appoint a high-level panel to consider the details of post-2015 goals, with UK Prime Minister David Cameron, Indonesian President Susilo Bambang Yudhoyono, and Liberian President Ellen Johnson Sirleaf as co-chairs. One scenario is that the Rio+20 summit will endorse the idea of the SDGs, and world leaders will adopt them at a special session of the UN General Assembly to review the MDGs in September, 2013. The SDGs are an important idea, and could help fi nally to move the world to a sustainable trajectory. 
The detailed content of the SDGs, if indeed they do emerge in upcoming diplomatic processes, is very much up for discussion and debate. Their content, I believe, should focus on two considerations: global priorities that need active worldwide public participation, political focus, and quantitative measurement; and lessons from the MDGs, especially the reasons for their successes, and corrections of some of their most important shortcomings. I have served Secretaries-General Kofi Annan and Ban Ki-Moon as Special Advisor on the MDGs, and look forward to contributing to the SDGs as well. The following suggestions, which I make solely in my personal capacity, include priorities for the SDGs and the best ways to build on the MDG successes and lessons.",
"title": ""
},
{
"docid": "561d8a130051ef2da6ad962eed110821",
"text": "In The Great Gatsby, Fitzgerald depicts the conflicts and contradictions between men and women about society, family, love, and money, literally mirroring the patriarchal society constantly challenged by feminism in the 1920s of America. This paper intends to compare the features of masculinism and feminism in three aspects: gender, society, and morality. Different identifications of gender role between men and women lead to female protests against male superiority and pursuits of individual liberation. Meanwhile, male unshaken egotism and gradually expanded individualism of women enable them both in lack of sound moral standards. But compared with the female, male moral pride drives them with much more proper moral judge, which reflects Fitzgerald’s support of the masculine society. Probing into the confrontation between masculinism and feminism, it is beneficial for further study on how to achieve equal coexistence and harmony between men and women.",
"title": ""
},
{
"docid": "a1a8dc4d3c1c0d2d76e0f1cd0cb039d2",
"text": "73 generalized vertex median of a weighted graph, \" Operations Res., pp. 955-961, July 1967. and 1973, respectively. He spent two and a half years at Bell Laboratories , Murray Hill, NJ, developing telemetrized automatic surveillance and control systems. He is now Manager at Data Communications Systems, Vienna, VA, where he has major responsibilities in research and development of network analysis and design capabilities, and has applied these capabilities in the direction of projects ranging from feasability analysis and design of front end processors for the Navy to development of network architectures for the FAA. NY, responsible for contributing to the ongoing research in the areas of large network design, topological optimization for terminal access, the concentrator location problem, and flow and congestion control strategies for packet switching networks. At present, Absfruct-An algorithm is defined for establishing routing tables in the individual nodes of a data network. The routing fable at a node i specifies, for each other node j , what fraction of the traffic destined far node j should leave node i on each of the links emanating from node i. The algorithm is applied independently at each node and successively updates the routing table at that node based on information communicated between adjacent nodes about the marginal delay to each destination. For stationary input traffic statistics, the average delay per message through the network converges, with successive updates of the routing tables, to the minimum average delay over all routing assignments. The algorithm has the additional property that the traffic to each destination is guaranteed to be loop free at each iteration of the algorithm. In addition, a new global convergence theorem for non-continuous iteration algorithms is developed. INTRODUCTION T HE problem of routing assignments has been one of the most intensively studied areas in the field of data networks in recent years. These routing problems can be roughly classified as static routing, quasi-static routing, and dynamic routing. Static routing can be typified by the following type of problem. One wishes to establish a new data network and makes various assumptions about the node locations, the link locations, and the capacities of the links. Given the traffic between each source and destination, one can calculate the traffic on each link as a function of the routing of the traffic. If one approximates the queueing delays on each link as a function of the link traffic, one can …",
"title": ""
},
{
"docid": "1b0fe1e4a33fbd9fab88dd53ca121f99",
"text": "C. P. Singh, SS Kulkarni, S.C. Rana, Kapil Deo, Scientists, Avionics Group, ARDE, Pune (DRDO), India Email-erchandravishwa02@gmail.com Abstract: In this paper, an efficient simulation model for Fuzzy logic controlled Brushless DC Motor drives using Matlab / Simulink is presented. Here Model of BLDC motor is based on State-space and its speed loop control is based on Fuzzy logic PID controller. A three phase inverter model is implemented for Motor commutation in six steps and PWM based motor Current Control with help of three Hall Sensors which are placed at 120 electrical degrees apart around the motor shaft. Dynamic performances (i.e. Torque and Speed) and currents and voltages of the inverter components are analyzed for this model. The modeling, simulation and two loop control of BLDC have been done in MATLAB\\SIMULINK software.",
"title": ""
},
{
"docid": "dbb21f81126dd049a569b26596151409",
"text": "A flexible statistical framework is developed for the analysis of read counts from RNA-Seq gene expression studies. It provides the ability to analyse complex experiments involving multiple treatment conditions and blocking variables while still taking full account of biological variation. Biological variation between RNA samples is estimated separately from the technical variation associated with sequencing technologies. Novel empirical Bayes methods allow each gene to have its own specific variability, even when there are relatively few biological replicates from which to estimate such variability. The pipeline is implemented in the edgeR package of the Bioconductor project. A case study analysis of carcinoma data demonstrates the ability of generalized linear model methods (GLMs) to detect differential expression in a paired design, and even to detect tumour-specific expression changes. The case study demonstrates the need to allow for gene-specific variability, rather than assuming a common dispersion across genes or a fixed relationship between abundance and variability. Genewise dispersions de-prioritize genes with inconsistent results and allow the main analysis to focus on changes that are consistent between biological replicates. Parallel computational approaches are developed to make non-linear model fitting faster and more reliable, making the application of GLMs to genomic data more convenient and practical. Simulations demonstrate the ability of adjusted profile likelihood estimators to return accurate estimators of biological variability in complex situations. When variation is gene-specific, empirical Bayes estimators provide an advantageous compromise between the extremes of assuming common dispersion or separate genewise dispersion. The methods developed here can also be applied to count data arising from DNA-Seq applications, including ChIP-Seq for epigenetic marks and DNA methylation analyses.",
"title": ""
},
{
"docid": "c64751968597299dc5622f589742c37d",
"text": "OpenFlow switching and Network Operating System (NOX) have been proposed to support new conceptual networking trials for fine-grained control and visibility. The OpenFlow is expected to provide multi-layer networking with switching capability of Ethernet, MPLS, and IP routing. NOX provides logically centralized access to high-level network abstraction and exerts control over the network by installing flow entries in OpenFlow compatible switches. The NOX, however, is missing the necessary functions for QoS-guaranteed software defined networking (SDN) service provisioning on carrier grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaborations among control elements in other domain network. In this paper, we propose a QoS-aware Network Operating System (QNOX) for SDN with Generalized OpenFlows. The functional modules and operations of QNOX for QoS-aware SDN service provisioning with the major components (e.g., service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE)) are explained in detail. The current status of prototype implementation and performances are explained. The scalability of the QNOX is also analyzed to confirm that the proposed framework can be applied for carrier grade large scale provider Internet1.",
"title": ""
}
] |
scidocsrr
|
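Illustrative aside (not part of the dataset rows above or below): the query and positive passage of the preceding record concern recurrent networks over skeleton-joint sequences for action recognition. The sketch below is a single bidirectional GRU with temporal average pooling, written in PyTorch; it is not the part-based hierarchical RNN or the multi-stream detector those papers describe, and the input shapes, joint count, and class count are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class BiGRUActionClassifier(nn.Module):
    """Toy bidirectional GRU over per-frame skeleton coordinates (illustrative only)."""

    def __init__(self, joint_dim: int = 75, hidden: int = 128, n_classes: int = 10):
        super().__init__()
        # joint_dim = 25 joints x 3 coordinates is a placeholder, not a dataset-specific value
        self.rnn = nn.GRU(joint_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, joint_dim) skeleton coordinates per frame
        out, _ = self.rnn(x)                  # (batch, time, 2 * hidden)
        return self.head(out.mean(dim=1))     # temporal average pooling, then classify

model = BiGRUActionClassifier()
clips = torch.randn(4, 30, 75)                # 4 clips, 30 frames each
print(model(clips).shape)                      # torch.Size([4, 10])
```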
ba9e5fe84679e806b71336e6d2e52040
|
A survey on attack detection on cloud using supervised learning techniques
|
[
{
"docid": "338a8efaaf4a790b508705f1f88872b2",
"text": "During the past several years, fuzzy control has emerged as one of the most active and fruitful areas for research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations. Fuzzy control is based on fuzzy logic-a logical system that is much closer in spirit to human thinking and natural language than traditional logical systems. The fuzzy logic controller (FLC) based on fuzzy logic provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. A survey of the FLC is presented ; a general methodology for constructing an FLC and assessing its performance is described; and problems that need further research are pointed out. In particular, the exposition includes a discussion of fuzzification and defuzzification strategies, the derivation of the database and fuzzy control rules, the definition of fuzzy implication, and an analysis of fuzzy reasoning mechanisms. A may be regarded as a means of emulating a skilled human operator. More generally, the use of an FLC may be viewed as still another step in the direction of model-ing human decisionmaking within the conceptual framework of fuzzy logic and approximate reasoning. In this context, the forward data-driven inference (generalized modus ponens) plays an especially important role. In what follows, we shall investigate fuzzy implication functions, the sentence connectives and and also, compositional operators, inference mechanisms, and other concepts that are closely related to the decisionmaking logic of an FLC. In general, a fuzzy control rule is a fuzzy relation which is expressed as a fuzzy implication. In fuzzy logic, there are many ways in which a fuzzy implication may be defined. The definition of a fuzzy implication may be expressed as a fuzzy implication function. The choice of a fuzzy implication function reflects not only the intuitive criteria for implication but also the effect of connective also. I) Basic Properties of a Fuuy Implication Function: The choice of a fuzzy implication function involves a number of criteria, which are discussed in considered the following basic characteristics of a fuzzy implication function: fundamental property, smoothness property, unrestricted inference, symmetry of generalized modus ponens and generalized modus tollens, and a measure of propagation of fuzziness. All of these properties are justified on purely intuitive grounds. We prefer to say …",
"title": ""
}
] |
[
{
"docid": "639ef3a979e916a6e38b32243235b73a",
"text": "Little is known about the specific kinds of questions programmers ask when evolving a code base and how well existing tools support those questions. To better support the activity of programming, answers are needed to three broad research questions: 1) What does a programmer need to know about a code base when evolving a software system? 2) How does a programmer go about finding that information? 3) How well do existing tools support programmers in answering those questions? We undertook two qualitative studies of programmers performing change tasks to provide answers to these questions. In this paper, we report on an analysis of the data from these two user studies. This paper makes three key contributions. The first contribution is a catalog of 44 types of questions programmers ask during software evolution tasks. The second contribution is a description of the observed behavior around answering those questions. The third contribution is a description of how existing deployed and proposed tools do, and do not, support answering programmers' questions.",
"title": ""
},
{
"docid": "e1e836fe6ff690f9c85443d26a1448e3",
"text": "■ We describe an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through nearinfrared band is combined with thermal infrared imagery by using principles of biological opponent-color vision. Visible imagery is obtained with a Gen III image intensifier tube fiber-optically coupled to a conventional charge-coupled device (CCD), and thermal infrared imagery is obtained by using an uncooled thermal imaging array. The two fields of view are matched and imaged through a dichroic beam splitter to produce realistic color renderings of a variety of night scenes. We also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery. Progress in the development of a low-light-sensitive visible CCD imager with high resolution and wide intrascene dynamic range, operating at thirty frames per second, is described. Example low-light CCD imagery obtained under controlled illumination conditions, from full moon down to overcast starlight, processed by our adaptive dynamic-range algorithm, is shown. The combination of a low-light visible CCD imager and a thermal infrared microbolometer array in a single dualband imager, with a portable image-processing computer implementing our neuralnet algorithms, and color liquid-crystal display, yields a compact integrated version of our system as a solid-state color night-vision device. The systems described here can be applied to a large variety of military operations and civilian needs.",
"title": ""
},
{
"docid": "a26dd0133a66a8868d84ef418bcaf9f5",
"text": "In performance display advertising a key metric of a campaign effectiveness is its conversion rate -- the proportion of users who take a predefined action on the advertiser website, such as a purchase. Predicting this conversion rate is thus essential for estimating the value of an impression and can be achieved via machine learning. One difficulty however is that the conversions can take place long after the impression -- up to a month -- and this delayed feedback hinders the conversion modeling. We tackle this issue by introducing an additional model that captures the conversion delay. Intuitively, this probabilistic model helps determining whether a user that has not converted should be treated as a negative sample -- when the elapsed time is larger than the predicted delay -- or should be discarded from the training set -- when it is too early to tell. We provide experimental results on real traffic logs that demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "9d65f4a5fe018e78e866fa0a537f2b6d",
"text": "BACKGROUND\nThere are many factors that affect college academic achievement among health sciences students.\n\n\nAIM\nThe aim of this study was to examine selected psychological, cognitive, and personal variables that affect students' academic achievement among health sciences college students in Saudi Arabia.\n\n\nMETHOD\nA correlational descriptive cross-sectional design was employed to collect data on the studied variables from 510 health sciences students (Medicine, Nursing, Respiratory Therapy, and Pharmacy Doctor) employing self-administered questionnaire.\n\n\nRESULTS\nResults showed that students experienced low level of self-esteem and low level of student-faculty interaction; and high level of achievement motivation and satisfaction with life. Also, they reported mild levels of depression and stress and a moderate level of anxiety. Female students reported higher level of achievement motivation, depression, anxiety, and stress; while male students reported a higher level of self-esteem. Results also showed that achievement motivation, mothers' educational level, working besides studying, gender, aptitude test score, and depression level were the best predictors of academic achievement and accounting for 43% of the total variance.\n\n\nCONCLUSIONS\nSeveral psychological, cognitive, and personal variables were found to affect college academic achievement among health sciences students. Recommendations and implications to enhance students' academic achievement are discussed.",
"title": ""
},
{
"docid": "56b42c551ad57c82ad15e6fc2e98f528",
"text": "Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). PGRD has few parameters, improves the reward",
"title": ""
},
{
"docid": "b584a0e8f8d15ad2b4db6ace48d589ef",
"text": "In recent years, IT project failures have received a great deal of attention in the press as well as the boardroom. In an attempt to avoid disasters going forward, many organizations are now learning from the past by conducting retrospectives—that is, project postmortems or post-implementation reviews. While each individual retrospective tells a unique story and contributes to organizational learning, even more insight can be gained by examining multiple retrospectives across a variety of organizations over time. This research aggregates the knowledge gained from 99 retrospectives conducted in 74 organizations over the past seven years. It uses the findings to reveal the most common mistakes and suggest best practices for more effective project management.2",
"title": ""
},
{
"docid": "ade9e951442251e01da80208ec572a9a",
"text": "Transformable polyhedral surfaces with rigid facets, i.e., rigid origami, are useful for designing kinetic and deployable structures. In order to apply rigid origami to various architectural and other engineering design purposes, it is essential to consider the geometry of origami in kinetic motion and provide sufficiently generalized methods to produce controlled variations of shapes that suit the given design conditions. In this paper, we introduce the author’s recent studies and their extensions on the geometry of rigid origami for designing transformable and deployable structures.",
"title": ""
},
{
"docid": "c63465c12bbf8474293c839f9ad73307",
"text": "Maintaining the balance or stability of legged robots in natural terrains is a challenging problem. Besides the inherent unstable characteristics of legged robots, the sources of instability are the irregularities of the ground surface and also the external pushes. In this paper, a push recovery framework for restoring the robot balance against external unknown disturbances will be demonstrated. It is assumed that the magnitude of exerted pushes is not large enough to use a reactive stepping strategy. In the comparison with previous methods, which a simplified model such as point mass model is used as the model of the robot for studying the push recovery problem, the whole body dynamic model will be utilized in present work. This enhances the capability of the robot to exploit all of the DOFs to recover its balance. To do so, an explicit dynamic model of a quadruped robot will be derived. The balance controller is based on the computation of the appropriate acceleration of the main body. It is calculated to return the robot to its desired position after the perturbation. This acceleration should be chosen under the stability and friction conditions. To calculate main body acceleration, an optimization problem is defined so that the stability, friction condition considered as its constraints. The simulation results show the effectiveness of the proposed algorithm. The robot can restore its balance against the large disturbance solely through the adjustment of the position and orientation of main body.",
"title": ""
},
{
"docid": "b37a2f3acae914632d6990df427be2c2",
"text": "Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 “discourse atoms” that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.",
"title": ""
},
{
"docid": "2f7a0eaf15515a9cf8cbbebc4d734072",
"text": "Rifampicin (Rif) is one of the most potent and broad spectrum antibiotics against bacterial pathogens and is a key component of anti-tuberculosis therapy, stemming from its inhibition of the bacterial RNA polymerase (RNAP). We determined the crystal structure of Thermus aquaticus core RNAP complexed with Rif. The inhibitor binds in a pocket of the RNAP beta subunit deep within the DNA/RNA channel, but more than 12 A away from the active site. The structure, combined with biochemical results, explains the effects of Rif on RNAP function and indicates that the inhibitor acts by directly blocking the path of the elongating RNA when the transcript becomes 2 to 3 nt in length.",
"title": ""
},
{
"docid": "8442995acf05044fc74817802c99ea1a",
"text": "Fumaric acid is a platform chemical with many applications in bio-based chemical and polymer production. Fungal cell morphology is an important factor that affects fumaric acid production via fermentation. In the present study, pellet and dispersed mycelia morphology of Rhizopus arrhizus NRRL 2582 was analysed using image analysis software and the impact on fumaric acid production was evaluated. Batch experiments were carried out in shake flasks using glucose as carbon source. The highest fumaric acid yield of 0.84 g/g total sugars was achieved in the case of dispersed mycelia with a final fumaric acid concentration of 19.7 g/L. The fumaric acid production was also evaluated using a nutrient rich feedstock obtained from soybean cake, as substitute of the commercial nitrogen sources. Solid state fermentation was performed in order to produce proteolytic enzymes, which were utilised for soybean cake hydrolysis. Batch fermentations were conducted using 50 g/L glucose and soybean cake hydrolysate achieving up to 33 g/L fumaric acid concentration. To the best of our knowledge the influence of R. arrhizus morphology on fumaric acid production has not been reported previously. The results indicated that dispersed clumps were more effective in fumaric acid production than pellets and renewable resources could be alternatively valorised for the biotechnological production of platform chemicals.",
"title": ""
},
{
"docid": "339aa2d53be2cf1215caa142ad5c58d2",
"text": "A true random number generator (TRNG) is an important component in cryptographic systems. Designing a fast and secure TRNG in an FPGA is a challenging task. In this paper we analyze the TRNG designed by Sunar et al. based on XOR of the outputs of many oscillator rings. We propose an enhanced TRNG that does not require post-processing to pass statistical tests and with better randomness characteristics on the output. We have shown by experiment that the frequencies of the equal length oscillator rings in the TRNG are not identical but different due to the placement of the inverters in the FPGA. We have implemented our proposed TRNG in an Altera Cyclone II FPGA. Our implementation has passed the NIST and DIEHARD statistical tests with a throughput of 100 Mbps and with a usage of less than 100 logic elements in the FPGA.",
"title": ""
},
{
"docid": "34ea56262e83b63a6e08591ae86b03ef",
"text": "This article focuses on the variants and imaging pitfalls in the ankle and foot.",
"title": ""
},
{
"docid": "11ad0993b62e016175638d80f9acd694",
"text": "Progressive macular hypomelanosis (PMH) is a skin disorder that is characterized by hypopigmented macules and usually seen in young adults. The skin microbiota, in particular the bacterium Propionibacterium acnes, is suggested to play a role. Here, we compared the P. acnes population of 24 PMH lesions from eight patients with corresponding nonlesional skin of the patients and matching control samples from eight healthy individuals using an unbiased, culture-independent next-generation sequencing approach. We also compared the P. acnes population before and after treatment with a combination of lymecycline and benzoylperoxide. We found an association of one subtype of P. acnes, type III, with PMH. This type was predominant in all PMH lesions (73.9% of reads in average) but only detected as a minor proportion in matching control samples of healthy individuals (14.2% of reads in average). Strikingly, successful PMH treatment is able to alter the composition of the P. acnes population by substantially diminishing the proportion of P. acnes type III. Our study suggests that P. acnes type III may play a role in the formation of PMH. Furthermore, it sheds light on substantial differences in the P. acnes phylotype distribution between the upper and lower back and abdomen in healthy individuals.",
"title": ""
},
{
"docid": "5cb5698cd97daa9da2f94f88dc59e8e7",
"text": "Inadvertent exposure of sensitive data is a major concern for potential cloud customers. Much focus has been on other data leakage vectors, such as side channel attacks, while issues of data disposal and assured deletion have not received enough attention to date. However, data that is not properly destroyed may lead to unintended disclosures, in turn, resulting in heavy financial penalties and reputational damage. In non-cloud contexts, issues of incomplete deletion are well understood. To the best of our knowledge, to date, there has been no systematic analysis of assured deletion challenges in public clouds.\n In this paper, we aim to address this gap by analysing assured deletion requirements for the cloud, identifying cloud features that pose a threat to assured deletion, and describing various assured deletion challenges. Based on this discussion, we identify future challenges for research in this area and propose an initial assured deletion architecture for cloud settings. Altogether, our work offers a systematization of requirements and challenges of assured deletion in the cloud, and a well-founded reference point for future research in developing new solutions to assured deletion.",
"title": ""
},
{
"docid": "7252372bdacaa69b93e52a7741c8f4c2",
"text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation",
"title": ""
},
{
"docid": "eb0a907ad08990b0fe5e2374079cf395",
"text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.",
"title": ""
},
{
"docid": "198944af240d732b6fadcee273c1ba18",
"text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.",
"title": ""
},
{
"docid": "8bae8e7937f4c9a492a7030c62d7d9f4",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "215b02216c68ba6eb2d040e8e01c1ac1",
"text": "Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform the knowledge into competitive advantages. However, here raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. The KM strategy selection is a kind of multiple criteria decision-making (MCDM) problem, which requires considering a large number of complex factors as multiple evaluation criteria. A robust MCDM method should consider the interactions among criteria. The analytic network process (ANP) is a relatively new MCDM method which can deal with all kinds of interactions systematically. Moreover, the Decision Making Trial and Evaluation Laboratory (DEMATEL) not only can convert the relations between cause and effect of criteria into a visual structural model, but also can be used as a way to handle the inner dependences within a set of criteria. Hence, this paper proposes an effective solution based on a combined ANP and DEMATEL approach to help companies that need to evaluate and select KM strategies. Additionally, an empirical study is presented to illustrate the application of the proposed method. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
f519a3b66048ab8c34aaaea4b3f8d830
|
Credit Default Mining Using Combined Machine Learning and Heuristic Approach
|
[
{
"docid": "e404699c5b86d3a3a47a1f3d745eecc1",
"text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.",
"title": ""
}
] |
[
{
"docid": "18140fdf4629a1c7528dcd6060f427c3",
"text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.",
"title": ""
},
{
"docid": "db208720fce72a768bddb83c9b29f5f8",
"text": "Correspondence to: James A Deddens, National Institute for Occupational Safety and Health, Mail Stop R15, 4676 Columbia Parkway, Cincinnati, OH 45226, USA; jad0@cdc.gov Recently there has been much interest in estimating the prevalence (risk, proportion or probability) ratio instead of the odds ratio, especially in occupational health studies involving common outcomes (for example, with prevalence rates above 10%). For example, if 80 out of 100 exposed subjects have a particular disease and 50 out of 100 non-exposed subjects have the disease, then the odds ratio (OR) is (80/20)/(50/50) = 4. However, the prevalence ratio (PR) is (80/100)/(50/ 100) = 1.6. The latter indicates that the exposed subjects are only 1.6 times as likely to have the disease as the non-exposed subjects, and this is the number in which most people would be interested. There is considerable literature on the advantages and disadvantages of OR versus PR (see Greenland, Stromberg, Axelson et al and others). In this article we will review the existing methods and give examples and recommendations on how to estimate the PR. The most common method of modelling binomial (no/yes or 0/1) health outcomes today is logistic regression. In logistic regression one models the probability of the binomial outcome (Y = 1) of interest as:",
"title": ""
},
{
"docid": "6e92e6eda1bb54dffcdf9c165a487e29",
"text": "Balanced Scorecard is considered as the world widely used Performance Management System by organizations. Around 57% organizations of the world are using the Balanced Scorecard tool for improving their Organizational Performance [1]. This technique of performance evaluation and management was coined by the Kaplan and Norton in 1992. From that date to 2012 a lot of work has been done by the academicians and practitioner on the Balanced Scorecard. This study is summarizing the major studies conducted on Balanced Scorecard from 1992 to 2012. Summing up all the criticism and appreciations on Balanced Scorecard, the study is suggesting some guidelines for improving the Balanced Scorecard in the light of previous researches conducted on Balanced",
"title": ""
},
{
"docid": "5871d4c7eff6523e129e467ccc92ab36",
"text": "The exquisite mechanical functionality and versatility of the human hand emerges from complex neuro-musculo-skeletal interactions that are not completely understood. I have found it useful to work within a theoretical/experimental paradigm that outlines the fundamental neuro-musculo-skeletal components and their interactions. In this integrative paradigm, the laws of mechanics, the specifications of the manipulation task, and the sensorimotor signals define the interactions among hand anatomy, the nervous system, and manipulation function. Thus, our collaborative research activities emphasize a firm grounding in the mechanics of finger function, insistence on anatomical detail, and meticulous characterization of muscle activity. This overview of our work on precision pinch (i.e., the ability to produce and control fingertip forces) presents some of our findings around three Research Themes: Mechanics-based quantification of manipulation ability; Anatomically realistic musculoskeletal finger models; and Neural control of finger muscles. I conclude that (i) driving the fingers to some limit of sensorimotor performance is instrumental to elucidating motor control strategies; (ii) that the cross-over of tendons from flexors to extensors in the extensor mechanism is needed to produce force in every direction, and (iii) the anatomical routing of multiarticular muscles makes co-contraction unavoidable for many tasks. Moreover, creating realistic and clinically useful finger models still requires developing new computational means to simulate the viscoelastic tendinous networks of the extensor mechanism, and the muscle-bone-ligament interactions in complex articulations. Building upon this neuromuscular biomechanics paradigm is of immense clinical relevance: it will be instrumental to the development of clinical treatments to preserve and restore manual ability in people suffering from neurological and orthopedic conditions. This understanding will also advance the design and control of robotic hands whose performance lags far behind that of their biological counterparts.",
"title": ""
},
{
"docid": "805fe4eea0e9415f8683f1135b135059",
"text": "In machine translation, information on word ambiguities is usually provided by the lexicographers who construct the lexicon. In this paper we propose an automatic method for word sense induction, i.e. for the discovery of a set of sense descriptors to a given ambiguous word. The approach is based on the statistics of the distributional similarity between the words in a corpus. Our algorithm works as follows: The 20 strongest first-order associations to the ambiguous word are considered as sense descriptor candidates. All pairs of these candidates are ranked according to the following two criteria: First, the two words in a pair should be as dissimilar as possible. Second, although being dissimilar their co-occurrence vectors should add up to the co-occurrence vector of the ambiguous word scaled by two. Both conditions together have the effect that preference is given to pairs whose co-occurring words are complementary. For best results, our implementation uses singular value decomposition, entropy-based weights, and second-order similarity metrics.",
"title": ""
},
{
"docid": "5a3595adacf256822c8719981ee11cca",
"text": "This paper deals with the most recent video coding standard H.265/HEVC. HEVC introduces new coding tools compared to older standards. Such a change has to influence the encoder complexity and will have influence on the encoding speed. In this paper, we focus on the performance of different implementations of the HEVC. The encoders are compared on the basis of final video quality and encoding speed. As the results showed, the differences in coding speed may be very significant among the encoders.",
"title": ""
},
{
"docid": "9e648d8a00cb82489e1b2cd0991f2fbd",
"text": "In this work, we propose and evaluate generic hardware countermeasures against DPA attacks for recent FPGA devices. The proposed set of FPGA-specific countermeasures can be combined to resist a large variety of first-order DPA attacks, even with 100 million recorded power traces. This set includes generic and resource-efficient countermeasures for on-chip noise generation, random-data processing delays and S-box scrambling using dual-ported block memories. In particular, it is possible to build many of these countermeasures into a single IP-core or hard macro that then provides basic protection for any cryptographic implementation just by its inclusion in the design process – what is particularly useful for engineers with no or little background on IT security and SCA attacks.",
"title": ""
},
{
"docid": "f7310780c48a9b32cfa39a1961ab7648",
"text": "An algorithm for filtering information based on the Pearson χ test approach has been implemented and tested on feature selection. This test is frequently used in biomedical data analysis and should be used only for nominal (discretiz ed) features. This algorithm has only one parameter, statistical confidence level that two di stributions are identical. Empirical comparisons with four other state-of-the-art features selection algorithms (FCBF, CorrSF, ReliefF and ConnSF) are very encouraging.",
"title": ""
},
{
"docid": "0eaee4f37754d0137de78cf1b4d8d950",
"text": "Outlier detection is an important task in data mining with numerous applications, including credit card fraud detection, video surveillance, etc. Outlier detection has been widely focused and studied in recent years. The concept about outlier factor of object is extended to the case of cluster. Although many outlier detection algorithms have been proposed, most of them face the top-n problem, i.e., it is difficult to know how many points in a database are outliers. In this paper we propose a novel outlier cluster detection algorithm called ROCF based on the concept of mutual neighbor graph and on the idea that the size of outlier clusters is usually much smaller than the normal clusters. ROCF can automatically figure out the outlier rate of a database and effectively detect the outliers and outlier clusters without top-n parameter. The formal analysis and experiments show that this method can achieve good performance in outlier detection. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3a5ef0db1fbbebd7c466a3b657e5e173",
"text": "Fully homomorphic encryption is faced with two problems now. One is candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) According to the binary-LWE recently, we choose secret key from binary set and modify the basic encryption scheme proposed in Linder and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys and tensored ciphertext have better size in our scheme. (2) Estimating parameters for fully homomorphic encryption scheme is an important work. We estimate the concert parameters for our scheme. We compare these parameters between our scheme and Bra12 scheme. Our scheme have public key and private key that smaller by a factor of about logq than in Bra12 scheme. Tensored ciphertext in our scheme is smaller by a factor of about log2q than in Bra12 scheme. Key switching matrix in our scheme is smaller by a factor of about log3q than in Bra12 scheme.",
"title": ""
},
{
"docid": "66d92a15cb31aaef765c74a1d5a86249",
"text": "In this paper, we present a new open-source software library, Gl-learning, for grammatical inference. The rise of new application scenarios in recent years has required optimized methods to address knowledge extraction from huge amounts of data and to model highly complex systems. Our library implements the main state-of-the-art algorithms in the grammatical inference field (RPNI, EDSM, L*), redesigned through the OpenMP library for a parallel execution that drastically decreases execution times. To our best knowledge, it is also the first comprehensive library including a noise tolerance learning algorithm, such as Blue*, that significantly broadens the range of the potential application scenarios for grammar models. The modular design of our C++ library makes it an efficient and extensible framework for the design of further novel algorithms.",
"title": ""
},
{
"docid": "f700b168c98d235a7fb76581cc24717f",
"text": "It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help developing such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulted videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. By considering several baseline approaches, we found that audio-visual approach based on lipsync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in presentation attack detection domain, resulted in 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.",
"title": ""
},
{
"docid": "9524269df0e8fbae27ee4e63d47b327b",
"text": "The quantum of power that a given EHVAC transmission line can safely carry depends on various limits. These limits can be categorized into two types viz. thermal and stability/SIL limits. In case of long lines the capacity is limited by its SIL level only which is much below its thermal capacity due to large inductance. Decrease in line inductance and surge impedance shall increase the SIL and transmission capacity. This paper presents a mathematical model of increasing the SIL level towards thermal limit. Sensitivity of SIL on various configuration of sub-conductors in a bundle, bundle spacing, tower structure, spacing of phase conductors etc. is analyzed and presented. Various issues that need attention for application of high surge impedance loading (HSIL) line are also deliberated",
"title": ""
},
{
"docid": "73ec43c5ed8e245d0a1ff012a6a67f76",
"text": "HERE IS MUCH signal processing devoted to detection and estimation. Detection is the task of detetmitdng if a specific signal set is pteaettt in an obs&tion, whflc estimation is the task of obtaining the va.iues of the parameters derriblng the signal. Often the s@tal is complicated or is corrupted by interfeting signals or noise To facilitate the detection and estimation of signal sets. the obsenation is decomposed by a basis set which spans the signal space [ 1) For many problems of engineering interest, the class of aigttlls being sought are periodic which leads quite natuallv to a decomposition by a basis consistittg of simple petiodic fun=tions, the sines and cosines. The classic Fourier tran.,fot,,, h the mechanism by which we M able to perform this decomposttmn. BY necessity, every observed signal we pmmust be of finite extent. The extent may be adjustable and Axtable. but it must be fire. Proces%ng a fiite-duration observation ~POSCS mteresting and interacting considentior,s on the hamomc analysic rhese consldentions include detectability of tones in the Presence of nearby strong tones, rcoohability of similarstrength nearby tones, tesolvability of Gxifting tona, and biases in estimating the parameten of my of the alonmenhoned signals. For practicality, the data we pare N unifomdy spaced samples of the obsetvcd signal. For convenience. N is highJy composite, and we will zwtme N is evett. The harmottic estm~afes we obtain UtmugJt the discrae Fowie~ tmnsfotm (DFT) arc N mifcwmly spaced samples of the asaciated periodic spectra. This approach in elegant and attnctive when the proce~ scheme is cast as a spectral decomposition in an N-dimensional orthogonal vector space 121. Unfottunately, in mmY practical situations, to obtain meaningful results this elegance must be compmmised. One such t=O,l;..,Nl.N.N+l.",
"title": ""
},
{
"docid": "b8d63090ea7d3302c71879ea4d11fde5",
"text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.",
"title": ""
},
{
"docid": "5048a090adfdd3ebe9d9253ca4f72644",
"text": "Movement disorders or extrapyramidal symptoms (EPS) associated with selective serotonin reuptake inhibitors (SSRIs) have been reported. Although akathisia was found to be the most common EPS, and fluoxetine was implicated in the majority of the adverse reactions, there were also cases with EPS due to sertraline treatment. We present a child and an adolescent who developed torticollis (cervical dystonia) after using sertraline. To our knowledge, the child case is the first such report of sertraline-induced torticollis, and the adolescent case is the third in the literature.",
"title": ""
},
{
"docid": "d2d1f14ca3370d9d87f4d38dd95a7c3b",
"text": "Dissidents, journalists, and others require technical means to protect their privacy in the face of compelled access to their digital devices (smartphones, laptops, tablets, etc.). For example, authorities increasingly force disclosure of all secrets, including passwords, to search devices upon national border crossings. We therefore present the design, implementation, and evaluation of a new system to help victims of compelled searches. Our system, called BurnBox, provides self-revocable encryption: the user can temporarily disable their access to specific files stored remotely, without revealing which files were revoked during compelled searches, even if the adversary also compromises the cloud storage service. They can later restore access. We formalize the threat model and provide a construction that uses an erasable index, secure erasure of keys, and standard cryptographic tools in order to provide security supported by our formal analysis. We report on a prototype implementation, which showcases the practicality of BurnBox.",
"title": ""
},
{
"docid": "176a982a60e302dcdd50484562dec7ce",
"text": "The palatine aponeurosis is a thin, fibrous lamella comprising the extended tendons of the tensor veli palatini muscles, attached to the posterior border and inferior surface of the palatine bone. In dentistry, the relationship between the “vibrating line” and the border of the hard and soft palate has long been discussed. However, to our knowledge, there has been no discussion of the relationship between the palatine aponeurosis and the vibrating line(s). Twenty sides from ten fresh frozen White cadaveric heads (seven males and three females) whose mean age at death was 79 years) were used in this study. The thickness of the mucosa including the submucosal tissue was measured. The maximum length of the palatine aponeurosis on each side and the distance from the posterior nasal spine to the posterior border of the palatine aponeurosis in the midline were also measured. The relationship between the marked borderlines and the posterior border of the palatine bone was observed. The thickness of the mucosa and submucosal tissue on the posterior nasal spine and the maximum length of the palatine aponeurosis were 3.4 mm, and 12.2 mm on right side and 12.8 mm on left, respectively. The length of the palatine aponeurosis in the midline was 4.9 mm. In all specimens, the borderline between the compressible and incompressible parts corresponded to the posterior border of the palatine bone.",
"title": ""
},
{
"docid": "c12cd99e8f1184fb77c7027c71a8dace",
"text": "This paper reports on a wearable gesture-based controller fabricated using the sensing capabilities of the flexible thin-film piezoelectric polymer polyvinylidene fluoride (PVDF) which is shown to repeatedly and accurately discern, in real time, between right and left hand gestures. The PVDF is affixed to a compression sleeve worn on the forearm to create a wearable device that is flexible, adaptable, and highly shape conforming. Forearm muscle movements, which drive hand motions, are detected by the PVDF which outputs its voltage signal to a developed microcontroller-based board and processed by an artificial neural network that was trained to recognize the generated voltage profile of right and left hand gestures. The PVDF has been spatially shaded (etched) in such a way as to increase sensitivity to expected deformations caused by the specific muscles employed in making the targeted right and left gestures. The device proves to be exceptionally accurate both when positioned as intended and when rotated and translated on the forearm.",
"title": ""
}
] |
scidocsrr
|
d2cd24f91a7ac06902f8a042c6bb50dd
|
Fuzzy set approach for automatic tagging in evolving software
|
[
{
"docid": "bdc057a6d2b9be79ae64983c5e429db7",
"text": "ion and Traceability\", Proceedings of the IEEE International Requirements Engineering Conference, Germany, September 2002. 70. Watkins R, Neal M, \"Why and How of Requirements Tracing\", IEEE Software, 104-106, July 1994",
"title": ""
}
] |
[
{
"docid": "b5c53afda0b8af1ecd1e973dd7cdd101",
"text": "MOTIVATION\nProtein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction.\n\n\nMETHOD\nThis paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformation of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformation of pairwise information including output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and complex sequence-structure relationship and thus, obtain higher-quality contact prediction regardless of how many sequence homologs are available for proteins in question.\n\n\nRESULTS\nOur method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly by soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then.\n\n\nAVAILABILITY\nhttp://raptorx.uchicago.edu/ContactMap/.",
"title": ""
},
{
"docid": "1f8af42bee4a15d76900d3b69628213f",
"text": "This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.",
"title": ""
},
{
"docid": "bf17acf28f242a0fd76117c9ef245f4d",
"text": "We present an algorithm to compute the silhouette set of a point cloud. Previous methods extract point set silhouettes by thresholding point normals, which can lead to simultaneous overand under-detection of silhouettes. We argue that additional information such as surface curvature is necessary to resolve these issues. To this end, we develop a local reconstruction scheme using Gabriel and intrinsic Delaunay criteria and define point set silhouettes based on the notion of a silhouette generating set. The mesh umbrellas, or local reconstructions of one-ring triangles surrounding each point sample, generated by our method enable accurate silhouette identification near sharp features and close-by surface sheets, and provide the information necessary to detect other characteristic curves such as creases and boundaries. We show that these curves collectively provide a sparse and intuitive visualization of point cloud data.",
"title": ""
},
{
"docid": "b1d2ff76f8b4437a731ef5ccdb46429f",
"text": "Form, function and the relationship between the two are notions that have served a crucial role in design science. Within architectural design, key aspects of the anticipated function of buildings, or of spatial environments in general, are supposed to be determined by their structural form, i.e., their shape, layout, or connectivity. Whereas the philosophy of form and function is a well-researched topic, the practical relations and dependencies between form and function are only known implicitly by designers and architects. Specifically, the formal modelling of structural form and resulting artefactual function within design and design assistance systems remains elusive. In our work, we aim at making these definitions explicit by the ontological modelling of domain entities, their properties and related constraints. We thus have to particularly focus on formal interpretation of the terms “(structural) form” and “(artefactual) function”. We put these notions into practice by formalising ontological specifications accordingly by using modularly constructed ontologies for the architectural design domain. A key aspect of our modelling approach is the use of formal qualitative spatial calculi and conceptual requirements as a link between the structural form of a design and the differing functional capabilities that it affords or leads to. We demonstrate the manner in which our ontological modelling reflects notions of architectural form and function, and how it facilitates the conceptual modelling of requirement constraints for architectural design.",
"title": ""
},
{
"docid": "c208270148481523a55620e634937668",
"text": "We propose DropMax, a stochastic version of softmax classifier which at each iteration drops non-target classes with some probability, for each instance. Specifically, we overlay binary masking variables over class output probabilities, which are learned based on the input via regularized variational inference. This stochastic regularization has an effect of building an ensemble classifier out of exponential number of classifiers with different decision boundaries. Moreover, the learning of dropout probabilities for non-target classes on each instance allows the classifier to focus more on classification against the most confusing classes. We validate our model on multiple public datasets for classification, on which it obtains improved accuracy over regular softmax classifier and other baselines. Further analysis of the learned dropout masks shows that our model indeed selects confusing classes more often when it performs classification.",
"title": ""
},
{
"docid": "cdc3a11e556cb73f5629135cbb5f0527",
"text": "Reinforcement learning methods are often considered as a potential solution to enable a robot to adapt to changes in real time to an unpredictable environment. However, with continuous action, only a few existing algorithms are practical for real-time learning. In such a setting, most effective methods have used a parameterized policy structure, often with a separate parameterized value function. The goal of this paper is to assess such actor-critic methods to form a fully specified practical algorithm. Our specific contributions include 1) developing the extension of existing incremental policy-gradient algorithms to use eligibility traces, 2) an empirical comparison of the resulting algorithms using continuous actions, 3) the evaluation of a gradient-scaling technique that can significantly improve performance. Finally, we apply our actor-critic algorithm to learn on a robotic platform with a fast sensorimotor cycle (10ms). Overall, these results constitute an important step towards practical real-time learning control with continuous action.",
"title": ""
},
{
"docid": "bb770a0cb686fbbb4ea1adb6b4194967",
"text": "Parental refusal of vaccines is a growing a concern for the increased occurrence of vaccine preventable diseases in children. A number of studies have looked into the reasons that parents refuse, delay, or are hesitant to vaccinate their child(ren). These reasons vary widely between parents, but they can be encompassed in 4 overarching categories. The 4 categories are religious reasons, personal beliefs or philosophical reasons, safety concerns, and a desire for more information from healthcare providers. Parental concerns about vaccines in each category lead to a wide spectrum of decisions varying from parents completely refusing all vaccinations to only delaying vaccinations so that they are more spread out. A large subset of parents admits to having concerns and questions about childhood vaccinations. For this reason, it can be helpful for pharmacists and other healthcare providers to understand the cited reasons for hesitancy so they are better prepared to educate their patients' families. Education is a key player in equipping parents with the necessary information so that they can make responsible immunization decisions for their children.",
"title": ""
},
{
"docid": "2d34d9e9c33626727734766a9951a161",
"text": "In this paper, we propose and study the use of alternating direction algorithms for several `1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the `1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an `1-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the `1-norm fidelity should be the fidelity of choice in compressive sensing.",
"title": ""
},
{
"docid": "20d1cb8d2f416c1dc07e5a34c2ec43ba",
"text": "Significant research and development of algorithms in intelligent transportation has grabbed more attention in recent years. An automated, fast, accurate and robust vehicle plate recognition system has become need for traffic control and law enforcement of traffic regulations; and the solution is ANPR. This paper is dedicated on an improved technique of OCR based license plate recognition using neural network trained dataset of object features. A blended algorithm for recognition of license plate is proposed and is compared with existing methods for improve accuracy. The whole system can be categorized under three major modules, namely License Plate Localization, Plate Character Segmentation, and Plate Character Recognition. The system is simulated on 300 national and international motor vehicle LP images and results obtained justifies the main requirement.",
"title": ""
},
{
"docid": "3f919d8a9fa34d7c06b40f012bca52bb",
"text": "Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification. However, the mathematical reasons for this success remain elusive. This tutorial will review recent work that aims to provide a mathematical justification for several properties of deep networks, such as global optimality, geometric stability, and invariance of the learned representations.",
"title": ""
},
{
"docid": "1eb2dcb1c5c1fb88e3f6a3b80fbf31d5",
"text": "For years, researchers and practitioners have primarily investigated the various processes within manufacturing supply chains individually. Recently, however, there has been increasing attention placed on the performance, design, and analysis of the supply chain as a whole. This attention is largely a result of the rising costs of manufacturing, the shrinking resources of manufacturing bases, shortened product life cycles, the leveling of the playing field within manufacturing, and the globalization of market economies. The objectives of this paper are to: (1) provide a focused review of literature in multi-stage supply chain modeling and (2) define a research agenda for future research in this area.",
"title": ""
},
{
"docid": "f8b45d97d18dcd9581df259fff1826c3",
"text": "Telemedicine programs provide specialty health services to remote populations using telecommunications technology. This innovative approach to medical care delivery has been expanding for several years and currently covers various specialty areas such as cardiology, dermatology, and pediatrics. Economic evaluations of telemedicine, however, remain rare, and few of those conducted have accounted for the wide range of economic costs and benefits. Rigorous benefit-cost analyses of telemedicine programs could provide credible and comparative evidence of their economic viability and thus lead to the adoption and/or expansion of the most successful programs. To facilitate more advanced economic evaluations, this article presents research guidelines for conducting benefit-cost analyses of telemedicine programs, emphasizing opportunity cost estimation, commonly used program outcomes, and monetary conversion factors to translate outcomes to dollar values. The article concludes with specific recommendations for future research.",
"title": ""
},
{
"docid": "9c717907ec6af9a4edebae84e71ef3f1",
"text": "We study a model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty. We then show how the Bitcoin network can be used to achieve the above notion of fairness in the two-party as well as the multiparty setting (with a dishonest majority). In particular, we propose new ideal functionalities and protocols for fair secure computation and fair lottery in this model. One of our main contributions is the definition of an ideal primitive, which we call F CR (CR stands for “claim-or-refund”), that formalizes and abstracts the exact properties we require from the Bitcoin network to achieve our goals. Naturally, this abstraction allows us to design fair protocols in a hybrid model in which parties have access to the F CR functionality, and is otherwise independent of the Bitcoin ecosystem. We also show an efficient realization of F CR that requires only two Bitcoin transactions to be made on the network. Our constructions also enjoy high efficiency. In a multiparty setting, our protocols only require a constant number of calls to F CR per party on top of a standard multiparty secure computation protocol. Our fair multiparty lottery protocol improves over previous solutions which required a quadratic number of Bitcoin transactions.",
"title": ""
},
{
"docid": "754dc26aa595c2c759a34540af369eac",
"text": "In recent years, the increasing popularity of outsourcing data to third-party cloud servers sparked a major concern towards data breaches. A standard measure to thwart this problem and to ensure data confidentiality is data encryption. Nevertheless, organizations that use traditional encryption techniques face the challenge of how to enable untrusted cloud servers perform search operations while the actually outsourced data remains confidential. Searchable encryption is a powerful tool that attempts to solve the challenge of querying data outsourced at untrusted servers while preserving data confidentiality. Whereas the literature mainly considers searching over an unstructured collection of files, this paper explores methods to execute SQL queries over encrypted databases. We provide a complete framework that supports private search queries over encrypted SQL databases, in particular for PostgreSQL and MySQL databases. We extend the solution for searchable encryption designed by Curtmola et al., to the case of SQL databases. We also provide features for evaluating range and boolean queries. We finally propose a framework for implementing our construction, validating its",
"title": ""
},
{
"docid": "e548d342a2578add2a8bb12c42f4e465",
"text": "Industry-proven field-weakening solutions for nonsalient-pole permanent-magnet synchronous motor drives are presented in this paper. The core algorithm relies on direct symbolic equations. The equations take into account the stator resistance and reveal its effect on overall algorithm quality. They establish a foundation for an offline calculated lookup table which secures effective d-axis current reference over entire field-weakening region. The table has been proven on its own and in combination with a PI compensator. Usage recommendations are given in this paper. Functionality of the proposed solutions has been investigated theoretically and in practice. The investigation has been carried out in the presence of motor magnetic saturation and parameter tolerance, taking into account the change of operating temperature. The results and analysis method are included in this paper.",
"title": ""
},
{
"docid": "2df622d64c88d1dffae9fe481a196e86",
"text": "Wireless technology has been gaining rapid popularity for some years. Adaptation of a standard depends on the ease of use and level of security it provides. In this case, contrast between wireless usage and security standards show that the security is not keeping up with the growth paste of end user’s usage. Current wireless technologies in use allow hackers to monitor and even change the integrity of transmitted data. Lack of rigid security standards has caused companies to invest millions on securing their wireless networks. There are three major types of security standards in wireless. In our previous paper which was presented in ICFCC2009 Conference in Kuala Lumpur and published by IEEE Computer Society [1], we explained the structure of WEP as a first wireless security standard and discussed all its versions, problems and improvements. Now, we try to explain all of WPA versions and problems with the best solutions and finally make a comparison between WEP and WPA. Then we are in the next phase which is to explain the structure of last standard (WPA2) and we hope that we will publish a complete comparison among wireless security techniques in the near future and recommend a new proposal as a new protocol.",
"title": ""
},
{
"docid": "fdbdac5f319cd46aeb73be06ed64cbb9",
"text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.",
"title": ""
},
{
"docid": "6ddb475ef1529ab496ab9f40dc51cb99",
"text": "While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.",
"title": ""
},
{
"docid": "91fe4479366c04a906463fb7361bdcfd",
"text": "The natural numbers may be our simplest, most useful, and best-studied abstract concepts, but their origins are debated. I consider this debate in the context of the proposal, by Gallistel and Gelman, that natural number system is a product of cognitive evolution and the proposal, by Carey, that it is a product of human cultural history. I offer a third proposal that builds on aspects of these views but rejects one tenet that they share: the thesis that counting is central to number. I suggest that children discover the natural numbers when they learn a natural language: especially nouns, number words, and the rules that compose quantified noun phrases. This learning, in turn, depends both on cognitive systems that are innate and shared by other animals, and on our species-specific language faculty. Thus, natural number concepts are unique to humans and culturally universal, yet they are learned. Natural number concepts may be our simplest abstract ideas. These concepts are exceedingly useful, serving as a basis for measurement, money, and mathematics, orienting us in space and time, and structuring activities from sports to elections. Likely because of their simplicity and ubiquity, the development of natural number concepts has been richly studied since the landmark research of Jean Piaget (1952) and the enduring challenges to his theory that followed (e.g., Gelman, 1972; Mehler & Bever, 1967; Siegal, 1999). Despite the simplicity of these concepts and the large body of research probing their development, however, the psychological foundations of natural number concepts continue to be debated. In the context of this debate, it is useful to characterize the natural number system in three interconnected ways. First, there is a minimal unit, ONE, that corresponds to the smallest distance separating distinct numbers (hereafter, the UNIT principle). Second, natural numbers can be generated by successive addition of one (hereafter, the principle of SUCCESSION). Third, two sets whose members can be placed in one-to-one correspondence have the same cardinal value: they are equal in number (hereafter, the principle of EXACT EQUALITY). Here I consider three general accounts of the development of this system of concepts, in relation to research that focuses on key aspects of these principles. According to the first account, natural number concepts are part of human nature. They are ancient: they evolved in distant ancestors, and so we share them with other animals. They are innate and begin functioning early in development: their emergence is not shaped by encounters with sets CONTACT Elizabeth S. Spelke spelke@wjh.harvard.edu Department of Psychology, Harvard University, 33 Kirkland St., Cambridge, MA 02138. In the literature on numerical development in children, the acquisition of “the successor principle” refers to a milestone in children’s mastery of counting: the point at which children understand that every word in their verbal count list refers to a cardinal value that is one larger than the value designated by the previous word. In some discussions, mastery of the successor principle of verbal counting is considered as a criterion for mastery of the natural number system (e.g., Sarnecka, 2016). Use of this criterion, however, would beg the present question. 
Because numerical language is learned, natural number concepts that were defined by children’s learning of number word meanings necessarily would be learned, ruling out Gelman and Gallistel’s nativist claims by definition and rendering Carey’s claims true by definition. On pain of circularity, experiments testing these theories and others require characterizations of numerical concepts that are independent of children’s learning of language. © 2017 Taylor & Francis Group, LLC LANGUAGE LEARNING AND DEVELOPMENT 2017, VOL. 13, NO. 2, 147–170 http://dx.doi.org/10.1080/15475441.2016.1263572 D ow nl oa de d by [ Sm ith so ni an A st ro ph ys ic s O bs er va to ry ] at 1 4: 27 2 5 A ug us t 2 01 7 and their transformations but instead serves to structure those encounters. And, these concepts are present in all human cultures: later development builds on them but does not overturn them, so they are available to children and adults everywhere. According to the second account, natural number concepts depend on a specific product of culture: a counting procedure. These concepts are recent and unique to humans, because the first counting procedure appears to have been invented relatively late in human prehistory. They are learned: indeed, contemporary children master counting procedures slowly and with difficulty. And they are culturally variable: different human groups count in different ways and to different extents, and some groups do not count at all. Drawing on old and new research, I suggest that neither of these theories captures the development of natural number concepts, and I sketch a third account of their emergence. I propose that natural number concepts arise through the productive combination of representations from a set of innate, ancient, and developmentally invariant cognitive systems: systems of core knowledge. In particular, natural number concepts depend on a system for representing sets and their approximate numerical magnitudes (hereafter, the Approximate Number System (ANS)) and a set of systems that collectively serve to represent objects as members of kinds. None of these core systems is unique to humans, but their productive combination depends on the acquisition and use of a natural language. Because both the core systems and the language faculty are universal across humans, and because children master their native language spontaneously, natural number concepts emerge universally, with no formal or informal instruction. Because language is unique to humans, so is our grasp of the natural numbers. Finally, because specific natural languages are learned, the system of natural number concepts is neither innate nor present in the youngest children. I begin by describing the first core system, the ANS. Then I turn to Gelman and Gallistel’s theory, considering both its virtues and the problems it faces in accounting for some prominent limits to the numerical reasoning of young children. These limits suggest that children’s earliest numerical representations fail to capture two of the three key principles that characterize the natural numbers: principles of exact equality and succession. That suggestion, in turn, motivates Carey’s account of the emergence of natural number concepts. After considering some strengths of this account, I turn to findings that raise problems for it. 
Then I suggest how language learning, rather than either the innate unfolding of a genetic program or the acquisition of culture-specific counting devices, might allow children to discover the natural numbers. A core system of number: From newborn infants to professional mathematicians, humans spontaneously represent the cardinal values of sets of objects or sequences of events with ratio-limited precision (see Dehaene, 2011, for review). Experiments conducted on newborn infants serve to illustrate this ability (Izard, Sann, Spelke, & Streri, 2009). Infants in a maternity hospital were familiarized with sequences of syllables. The particular syllables changed from one sequence to the next, as did the duration of the syllable sequences, but the number of syllables was constant—4 long syllables per sequence for half the infants and 12 shorter syllables per sequence for the others. While infants listened to the sequences, alternating visual arrays appeared, containing 4 or 12 objects of variable but comparable shapes and sizes. Infants looked reliably longer at the object array that corresponded in number to the sequence of sounds. Because the visual and auditory arrays differed in modality and format, and could not be matched on the basis of continuous, extensive, or intensive quantities such as contour length, sequence duration, or item size, these looking patterns provide evidence that infants detected the numerical correspondence between these arrays, and therefore the numerical distinction between 4 and 12. Further experiments revealed that newborn infants also distinguish 6 from 18 but not 4 from 8. Beginning in the first days after birth, the ability to discriminate between two numbers depends on their ratio. Do newborn infants relate sequences of syllables to arrays of visual forms through noisy processes of one-to-one correspondence, applied to the individual members of these sets? Alternatively, do infants represent the sequences and arrays as ensembles with approximate numerical magnitudes, matching sets with similar magnitudes? Several findings indicate that the latter representations underlie infants’ performance. Processing of individual objects shows a set size limit—most adults cannot hold more than four objects in mind at once, and infants are limited to fewer objects than this (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013)—but infants respond to number in much larger arrays. Indeed, infants often fail to represent the numerical sizes of very small sets of objects. For example, newborn infants fail to match sequences of 2 sounds to arrays of 2 rather than 6 objects when tested under the same conditions that reveal matching of sequences of 4 sounds to arrays of 4 rather than 12 objects (Coubart, Izard, Spelke, Marie, & Streri, 2014). When presented with very small sets, infants and adults alike typically focus attention on the individual objects and suppress representations of the set’s numerical magnitude (Hyde & Spelke, 2009, 2011), although both adults and infants can focus on the numerical values of small sets if presentation conditions make individual objects difficult to track (Hyde & Wood, 2011; Starr, Libertus, & Brannon, 20",
"title": ""
},
{
"docid": "d4406b74040e9f06b1d05cefade12c6c",
"text": "Steganography is a science to hide information, it hides a message to another object, and it increases the security of data transmission and archiving it. In the process of steganography, the hidden object in which data is hidden the carrier object and the new object, is called the steganography object. The multiple carriers, such as text, audio, video, image and so can be mentioned for steganography; however, audio has been significantly considered due to the multiplicity of uses in various fields such as the internet. For steganography process, several methods have been developed; including work in the temporary and transformation, each of has its own advantages and disadvantages, and special function. In this paper we mainly review and evaluate different types of audio steganography techniques, advantages and disadvantages.",
"title": ""
}
] |
scidocsrr
|
b66c3fe232f3b4d0ab6eeffd628ed8a1
|
Action understanding as inverse planning
|
[
{
"docid": "32ee8dadf5d8983f40f984f64be37211",
"text": "This paper introduces a model of 'theory of mind', namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a 'game theory of mind'. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a 'stag-hunt'. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchal game theory and coevolution.",
"title": ""
},
{
"docid": "b3ebbff355dfc23b4dfbab3bc3012980",
"text": "Research with young children has shown that, like adults, they focus selectively on the aspects of an actor's behavior that are relevant to his or her underlying intentions. The current studies used the visual habituation paradigm to ask whether infants would similarly attend to those aspects of an action that are related to the actor's goals. Infants saw an actor reach for and grasp one of two toys sitting side by side on a curtained stage. After habituation, the positions of the toys were switched and babies saw test events in which there was a change in either the path of motion taken by the actor's arm or the object that was grasped by the actor. In the first study, 9-month-old infants looked longer when the actor grasped a new toy than when she moved through a new path. Nine-month-olds who saw an inanimate object of approximately the same dimensions as the actor's arm touch the toy did not show this pattern in test. In the second study, 5-month-old infants showed similar, though weaker, patterns. A third study provided evidence that the findings for the events involving a person were not due to perceptual changes in the objects caused by occlusion by the hand. A fourth study replicated the 9 month results for a human grasp at 6 months, and revealed that these effects did not emerge when infants saw an inanimate object with digits that moved to grasp the toy. Taken together, these findings indicate that young infants distinguish in their reasoning about human action and object motion, and that by 6 months infants encode the actions of other people in ways that are consistent with more mature understandings of goal-directed action.",
"title": ""
}
] |
[
{
"docid": "1d72e3bbc8106a8f360c05bd0a638f0d",
"text": "Advancements in computer vision, natural language processing and deep learning techniques have resulted in the creation of intelligent systems that have achieved impressive results in the visually grounded tasks such as image captioning and visual question answering (VQA). VQA is a task that can be used to evaluate a system's capacity to understand an image. It requires an intelligent agent to answer a natural language question about an image. The agent must ground the question into the image and return a natural language answer. One of the latest techniques proposed to tackle this task is the attention mechanism. It allows the agent to focus on specific parts of the input in order to answer the question. In this paper we propose a novel long short-term memory (LSTM) architecture that uses dual attention to focus on specific question words and parts of the input image in order to generate the answer. We evaluate our solution on the recently proposed Visual 7W dataset and show that it performs better than state of the art. Additionally, we propose two new question types for this dataset in order to improve model evaluation. We also make a qualitative analysis of the results and show the strength and weakness of our agent.",
"title": ""
},
{
"docid": "7669e2e334ccc4e24fd05ab59f28d1fc",
"text": "Organizations are increasingly inter-connected as they source talent, goods and services from other organizations located in disparate parts of the world. They seek new ways of creating value for themselves, customers and partners. They operate outside and across traditional industry boundaries and definitions. These innovations have lead to a focus on business models as a fundamental statement of direction and identity. This paper highlights what is known about the business model concept and where and why it differs from more established concepts of business strategy. It illustrates how the application of business models has transformed organizations. The contribution of this paper is the guidance that it provides for business model design and the insight it provides into business models and their effects on organizations. Following an analysis of how business models can transform organizations, this paper concludes with practical recommendations for business model design.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "abcbd831178e1bc5419da8274dc17bbf",
"text": "Most state-of-the-art statistical machine translation systems use log-linear models, which are defined in terms of hypothesis features and weights for those features. It is standard to tune the feature weights in order to maximize a translation quality metric, using heldout test sentences and their corresponding reference translations. However, obtaining reference translations is expensive. In our earlier work (Madnani et al., 2007), we introduced a new full-sentence paraphrase technique, based on English-to-English decoding with an MT system, and demonstrated that the resulting paraphrases can be used to cut the number of human reference translations needed in half. In this paper, we take the idea a step further, asking how far it is possible to get with just a single good reference translation for each item in the development set. Our analysis suggests that it is necessary to invest in four or more human translations in order to significantly improve on a single translation augmented by monolingual paraphrases.",
"title": ""
},
{
"docid": "2ac12ee85207b456e85471a0aa95f3f5",
"text": "Software-based attestation schemes aim at proving the integrity of code and data residing on a platform to a verifying party. However, they do not bind the hardware characteristics to the attestation protocol and are vulnerable to impersonation attacks.\n We present PUFatt, a new automatable method for linking software-based attestation to intrinsic device characteristics by means of a novel processor-based Physically Unclonable Function, which enables secure timed (and even) remote attestation particularly suitable for embedded and low-cost devices. Our proof-of-concept implementation on FPGA demonstrates the effectiveness, applicability and practicability of the approach.",
"title": ""
},
{
"docid": "5a355c69e7f8e4248a63ef83b06b7095",
"text": "The interior permanent magnet (IPM) machine equipped with a fractional-slot concentrated winding (FSCW) has met an increasing interest in electric vehicle applications due to its higher power density and efficiency. Torque production is due to both PM and reluctance torques. However, one of the main challenges of FSCWs is their inability to produce a high-quality magnetomotive force (MMF) distribution, yielding undesirable rotor core and magnet eddy-current losses. Literature shows that the reduction of low-order space harmonics significantly reduces these loss components. Moreover, it has been previously shown that by employing a higher number of layers, although causing some reduction in the winding factor of the torque-producing MMF component, both machine saliency and reluctance torque components are improved. Recently, a dual three-phase winding connected in a star/delta connection has also shown promise to concurrently enhance machine torque of a surface-mounted PM machine while significantly reducing both rotor core and magnet losses. In this paper, a multilayer winding configuration and a dual three-phase winding connection are combined and applied to the well-known 12-slot/10-pole IPM machine with v-shaped magnets. The proposed winding layout is compared with a conventional double-layer winding, a dual three-phase double-layer winding, and a four-layer winding. The comparison is carried out using 2-D finite-element analysis. The comparison shows that the proposed winding layout, while providing similar output torque to a conventional double-layer three-phase winding, offers a significant reduction in core and magnet losses, correspondingly a higher efficiency, improves the machine saliency ratio, and maximizes the reluctance toque component.",
"title": ""
},
{
"docid": "2903e8be6b9a3f8dc818a57197ec1bee",
"text": "A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.",
"title": ""
},
{
"docid": "10bc2f9827aa9a53e3ca4b7188bd91c3",
"text": "Learning hash functions across heterogenous high-dimensional features is very desirable for many applications involving multi-modal data objects. In this paper, we propose an approach to obtain the sparse codesets for the data objects across different modalities via joint multi-modal dictionary learning, which we call sparse multi-modal hashing (abbreviated as SM2H). In SM2H, both intra-modality similarity and inter-modality similarity are first modeled by a hypergraph, then multi-modal dictionaries are jointly learned by Hypergraph Laplacian sparse coding. Based on the learned dictionaries, the sparse codeset of each data object is acquired and conducted for multi-modal approximate nearest neighbor retrieval using a sensitive Jaccard metric. The experimental results show that SM2H outperforms other methods in terms of mAP and Percentage on two real-world data sets.",
"title": ""
},
{
"docid": "5bee27378a98ff5872f7ae5e899f81e2",
"text": "An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with the average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.",
"title": ""
},
{
"docid": "f709153cdc958cc636ac6d68405bc2b0",
"text": "While enormous progress has been made to Variational Autoencoder (VAE) in recent years, similar to other deep networks, VAE with deep networks suffers from the problem of degeneration, which seriously weakens the correlation between the input and the corresponding latent codes, deviating from the goal of the representation learning. To investigate how degeneration affects VAE from a theoretical perspective, we illustrate the information transmission in VAE and analyze the intermediate layers of the encoders/decoders. Specifically, we propose a Fisher Information measure for the layer-wise analysis. With such measure, we demonstrate that information loss is ineluctable in feed-forward networks and causes the degeneration in VAE. We show that skip connections in VAE enable the preservation of information without changing the model architecture. We call this class of VAE equipped with skip connections as SCVAE and perform a range of experiments to show its advantages in information preservation and degeneration mitigation.",
"title": ""
},
{
"docid": "f47841bc67e842102dc72dc8d39d8262",
"text": "Eye gaze estimation systems calculate the direction of human eye gaze. Numerous accurate eye gaze estimation systems considering a user s head movement have been reported. Although the systems allow large head motion, they require multiple devices and complicate computation in order to obtain the geometrical positions of an eye, cameras, and a monitor. The light-reflection-based method proposed in this paper does not require any knowledge of their positions, so the system utilizing the proposed method is lighter and easier to use than the conventional systems. To estimate where the user looks allowing ample head movement, we utilize an invariant value (cross-ratio) of a projective space. Also, a robust feature detection using an ellipse-specific active contour is suggested in order to find features exactly. Our proposed feature detection and estimation method are simple and fast, and shows accurate results under large head motion. 2004 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "f7d06c6f2313417fd2795ce4c4402f0e",
"text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.",
"title": ""
},
{
"docid": "f07c06a198547aa576b9a6350493e6d4",
"text": "In this paper we examine the diffusion of competing rumors in social networks. Two players select a disjoint subset of nodes as initiators of the rumor propagation, seeking to maximize the number of persuaded nodes. We use concepts of game theory and location theory and model the selection of starting nodes for the rumors as a strategic game. We show that computing the optimal strategy for both the first and the second player is NP-complete, even in a most restricted model. Moreover we prove that determining an approximate solution for the first player is NP-complete as well. We analyze several heuristics and show that—counter-intuitively—being the first to decide is not always an advantage, namely there exist networks where the second player can convince more nodes than the first, regardless of the first player’s decision.",
"title": ""
},
{
"docid": "a2d851b76d6abcb3d9377c566b8bf6d9",
"text": "Many fabrication processes for polymeric objects include melt extrusion, in which the molten polymer is conveyed by a ram or a screw and the melt is then forced through a shaping die in continuous processing or into a mold for the manufacture of discrete molded parts. The properties of the fabricated solid object, including morphology developed during cooling and solidification, depend in part on the stresses and orientation induced during the melt shaping. Most polymers used for commercial processing are of sufficiently high molecular weight that the polymer chains are highly entangled in the melt, resulting in flow behavior that differs qualitatively from that of low-molecular-weight liquids. Obvious manifestations of the differences from classical Newtonian fluids are a strongly shear-dependent viscosity and finite stresses normal to the direction of shear in rectilinear flow, transients of the order of seconds for the buildup or relaxation of stresses following a change in shear rate, a finite phase angle between stress and shear rate in oscillatory shear, ratios of extensional to shear viscosities that are considerably greater than 3, and substantial extrudate swell on extrusion from a capillary or slit. These rheological characteristics of molten polymers have been reviewed in textbooks (e.g. Larson 1999, Macosko 1994); the recent research emphasis in rheology has been to establish meaningful constitutive models that incorporate chain behavior at a molecular level. All polymer melts and concentrated solutions exhibit instabilities during extrusion when the stresses to which they are subjected become sufficiently high. The first manifestation of extrusion instability is usually the appearance of distortions on the extrudate surface, sometimes accompanied by oscillating flow. Gross distortion of the extrudate usually follows. The sequence of extrudate distortions",
"title": ""
},
{
"docid": "b57b392e89b92aecb03235eeaaf248c8",
"text": "Recent advances in semiconductor performance made possible by organic π-electron molecules, carbon-based nanomaterials, and metal oxides have been a central scientific and technological research focus over the past decade in the quest for flexible and transparent electronic products. However, advances in semiconductor materials require corresponding advances in compatible gate dielectric materials, which must exhibit excellent electrical properties such as large capacitance, high breakdown strength, low leakage current density, and mechanical flexibility on arbitrary substrates. Historically, conventional silicon dioxide (SiO2) has dominated electronics as the preferred gate dielectric material in complementary metal oxide semiconductor (CMOS) integrated transistor circuitry. However, it does not satisfy many of the performance requirements for the aforementioned semiconductors due to its relatively low dielectric constant and intransigent processability. High-k inorganics such as hafnium dioxide (HfO2) or zirconium dioxide (ZrO2) offer some increases in performance, but scientists have great difficulty depositing these materials as smooth films at temperatures compatible with flexible plastic substrates. While various organic polymers are accessible via chemical synthesis and readily form films from solution, they typically exhibit low capacitances, and the corresponding transistors operate at unacceptably high voltages. More recently, researchers have combined the favorable properties of high-k metal oxides and π-electron organics to form processable, structurally well-defined, and robust self-assembled multilayer nanodielectrics, which enable high-performance transistors with a wide variety of unconventional semiconductors. In this Account, we review recent advances in organic-inorganic hybrid gate dielectrics, fabricated by multilayer self-assembly, and their remarkable synergy with unconventional semiconductors. We first discuss the principals and functional importance of gate dielectric materials in thin-film transistor (TFT) operation. Next, we describe the design, fabrication, properties, and applications of solution-deposited multilayer organic-inorganic hybrid gate dielectrics, using self-assembly techniques, which provide bonding between the organic and inorganic layers. Finally, we discuss approaches for preparing analogous hybrid multilayers by vapor-phase growth and discuss the properties of these materials.",
"title": ""
},
{
"docid": "3e60194e452e0e7a478d7c5f563eaa13",
"text": "The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers. This understanding can enlighten information system design, interface development, and devising the information architecture for content collections. This article presents a review and foundation for conducting Web search transaction log analysis. A methodology is outlined consisting of three stages, which are collection, preparation, and analysis. The three stages of the methodology are presented in detail with discussions of goals, metrics, and processes at each stage. Critical terms in transaction log analysis for Web searching are defined. The strengths and limitations of transaction log analysis as a research method are presented. An application to log client-side interactions that supplements transaction logs is reported on, and the application is made available for use by the research community. Suggestions are provided on ways to leverage the strengths of, while addressing the limitations of, transaction log analysis for Web-searching research. Finally, a complete flat text transaction log from a commercial search engine is available as supplementary material with this manuscript. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "131862b294936c95b8dba851b38c86fa",
"text": "In this paper, we revisit the Lagrangian accumulation process that aggregates the local attribute information along integral curves for vector field visualization. Similar to the previous work, we adopt the notation of the Lagrangian accumulation field or A field for the representation of the accumulation results. In contrast to the previous work, we provide a more in-depth discussion on the properties of A fields and the meaning of the patterns exhibiting in A fields. In particular, we revisit the discontinuity in the A fields and provide a thorough explanation of its relation to the flow structure and the additional information of the flow that it may reveal. In addition, other remaining questions about the A field, such as its sensitivity to the selection of integration time, are also addressed. Based on these new insights, we demonstrate a number of enhanced flow visualizations aided by the accumulation framework and the A fields, including a new A field guided ribbon placement, a A field guided stream surface seeding and the visualization of particle-based flow data. To further demonstrate the generality of the accumulation framework, we extend it to the non-integral geometric curves (i.e. streak lines), which enables us to reveal information of the flow behavior other than those revealed by the integral curves. Finally, we introduce the Eulerian accumulation, which can reveal different flow behavior information from those revealed by the Lagrangian accumulation. In summary, we believe the Lagrangian accumulation and the resulting A fields offer a valuable way for the exploration of flow behaviors in addition to the current state-of-the-art techniques. c © 2017 Elsevier B. V. All rights reserved.",
"title": ""
},
{
"docid": "7b6775a595cf843eac0b30ad850f8c32",
"text": "The main objectives of the study were: to investigate whether training on working memory (WM) could improve fluid intelligence, and to investigate the effects WM training had on neuroelectric (electroencephalography - EEG) and hemodynamic (near-infrared spectroscopy - NIRS) patterns of brain activity. In a parallel group experimental design, respondents of the working memory group after 30 h of training significantly increased performance on all tests of fluid intelligence. By contrast, respondents of the active control group (participating in a 30-h communication training course) showed no improvements in performance. The influence of WM training on patterns of neuroelectric brain activity was most pronounced in the theta and alpha bands. Theta and lower-1 alpha band synchronization was accompanied by increased lower-2 and upper alpha desynchronization. The hemodynamic patterns of brain activity after the training changed from higher right hemispheric activation to a balanced activity of both frontal areas. The neuroelectric as well as hemodynamic patterns of brain activity suggest that the training influenced WM maintenance functions as well as processes directed by the central executive. The changes in upper alpha band desynchronization could further indicate that processes related to long term memory were also influenced.",
"title": ""
},
{
"docid": "d9536cfdb4fec42e8b53a808db938b9b",
"text": "The main objective of this research is to give a cognitive support to a person for understanding Hindi language text using automatic text visualization (ATV). ATV is especially useful for the persons with learning disabilities (LD). This paper focuses on the background and complexity of the problem. The impact on comprehension through visualized text over reading the text is discussed. The architecture of Preksha — a Hindi text visulizer — is given and several illustrative examples of scenes generated using it are given.",
"title": ""
},
{
"docid": "ddae1c6469769c2c7e683bfbc223ad1a",
"text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments1 show2 that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"title": ""
}
] |
scidocsrr
|
953cda6b915f85557eda2d90c7587ab2
|
Data Curation with Deep Learning [Vision]: Towards Self Driving Data Curation
|
[
{
"docid": "f100d6a8d4adfe58b465a60630edf563",
"text": "Missing data is a significant problem impacting all domains. State-of-the-art framework for minimizing missing data bias is multiple imputation, for which the choice of an imputation model remains nontrivial. We propose a multiple imputation model based on overcomplete deep denoising autoencoders. Our proposed model is capable of handling different data types, missingness patterns, missingness proportions and distributions. Evaluation on several real life datasets show our proposed model significantly outperforms current state-of-the-art methods under varying conditions while simultaneously improving end of the line analytics.",
"title": ""
},
{
"docid": "c6abeae6e9287f04b472595a47e974ad",
"text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
}
] |
[
{
"docid": "83c0e0c81a809314e93471e9bcd6aabe",
"text": "A rail-to-rail amplifier with an offset cancellation, which is suitable for high color depth and high-resolution liquid crystal display (LCD) drivers, is proposed. The amplifier incorporates dual complementary differential pairs, which are classified as main and auxiliary transconductance amplifiers, to obtain a full input voltage swing and an offset canceling capability. Both offset voltage and injection-induced error, due to the device mismatch and charge injection, respectively, are greatly reduced. The offset cancellation and charge conservation, which is used to reduce the dynamic power consumption, are operated during the same time slot so that the driving period does not need to increase. An experimental prototype amplifier is implemented with 0.35m CMOS technology. The circuit draws 7.5 A static current and exhibits the settling time of 3 s, for a voltage swing of 5 V under a 3.4 k resistance, and a 140 pF capacitance load with a power supply of 5 V. The offset voltage of the amplifier with offset cancellation is 0.48 mV.",
"title": ""
},
{
"docid": "8b57c1f4c865c0a414b2e919d19959ce",
"text": "A microstrip HPF with sharp attenuation by using cross-coupling is proposed in this paper. The HPF consists of parallel plate- and gap type- capacitors and inductor lines. The one block of the HPF has two sections of a constant K filter in the bridge T configuration. Thus the one block HPF is first coarsely designed and the performance is optimized by circuit simulator. With the gap capacitor adjusted the proposed HPF illustrates the sharp attenuation characteristics near the cut-off frequency made by cross-coupling between the inductor lines. In order to improve the stopband performance, the cascaded two block HPF is examined. Its measured results show the good agreement with the simulated ones giving the sharper attenuation slope.",
"title": ""
},
{
"docid": "e8f424ee75011e7cf9c2c3cbf5ea5037",
"text": "BACKGROUND\nEmotional distress is an increasing public health problem and Hatha yoga has been claimed to induce stress reduction and empowerment in practicing subjects. We aimed to evaluate potential effects of Iyengar Hatha yoga on perceived stress and associated psychological outcomes in mentally distressed women.\n\n\nMATERIAL/METHODS\nA controlled prospective non-randomized study was conducted in 24 self-referred female subjects (mean age 37.9+/-7.3 years) who perceived themselves as emotionally distressed. Subjects were offered participation in one of two subsequential 3-months yoga programs. Group 1 (n=16) participated in the first class, group 2 (n=8) served as a waiting list control. During the yoga course, subjects attended two-weekly 90-min Iyengar yoga classes. Outcome was assessed on entry and after 3 months by Cohen Perceived Stress Scale, State-Trait Anxiety Inventory, Profile of Mood States, CESD-Depression Scale, Bf-S/Bf-S' Well-Being Scales, Freiburg Complaint List and ratings of physical well-being. Salivary cortisol levels were measured before and after an evening yoga class in a second sample.\n\n\nRESULTS\nCompared to waiting-list, women who participated in the yoga-training demonstrated pronounced and significant improvements in perceived stress (P<0.02), State and Trait Anxiety (P<0.02 and P<0.01, respectively), well-being (P<0.01), vigor (P<0.02), fatigue (P<0.02) and depression (P<0.05). Physical well-being also increased (P<0.01), and those subjects suffering from headache or back pain reported marked pain relief. Salivary cortisol decreased significantly after participation in a yoga class (P<0.05).\n\n\nCONCLUSIONS\nWomen suffering from mental distress participating in a 3-month Iyengar yoga class show significant improvements on measures of stress and psychological outcomes. Further investigation of yoga with respect to prevention and treatment of stress-related disease and of underlying mechanism is warranted.",
"title": ""
},
{
"docid": "ceb270c07d26caec5bc20e7117690f9f",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "ce08b02ae03c8496e051c3443874de8f",
"text": "The goal was to determine the utility and accuracy of automated analysis of single-lead electrocardiogram (ECG) data using two algorithms, cardiopulmonary coupling (CPC), and cyclic variation of heart rate (CVHR) to identify sleep apnea (SA). The CPC-CVHR algorithms were applied to identify SA by analyzing ECG from diagnostic polysomnography (PSG) from 47 subjects. The studies were rescored according to updated AASM scoring rules, both manually by a certified technologist and using an FDA-approved automated scoring software, Somnolyzer (Philips Inc., Monroeville, PA). The CPC+CVHR output of Sleep Quality Index (SQI), Sleep Apnea Indicator (SAI), elevated low frequency coupling broadband (eLFCBB) and elevated low frequency coupling narrow-band (eLFCNB) were compared to the manual and automated scoring of apnea hypopnea index (AHI). A high degree of agreement was noted between the CPC-CVHR against both the manually rescored AHI and the computerized scored AHI to identify patients with moderate and severe sleep apnea (AHI > 15). The combined CPC+CVHR algorithms, when compared to the manually scored PSG output presents sensitivity 89%, specificity 79%, agreement 85%, PPV (positive predictive value) 0.86 and NPV (negative predictive value) 0.83, and substantial Kappa 0.70. Comparing the output of the automated scoring software to the manual scoring demonstrated sensitivity 93%, specificity 79%, agreement 87%, PPV 0.87, NPV 0.88, and substantial Kappa 0.74. The CPC+CVHR technology performed as accurately as the automated scoring software to identify patients with moderate to severe SA, demonstrating a clinically powerful tool that can be implemented in various clinical settings to identify patients at risk for SA. NCT01234077.",
"title": ""
},
{
"docid": "8e2bc8c050ebaeb295f74f9e405ed280",
"text": "Multi-modal semantics has relied on feature norms or raw image data for perceptual input. In this paper we examine grounding semantic representations in raw auditory data, using standard evaluations for multi-modal semantics, including measuring conceptual similarity and relatedness. We also evaluate cross-modal mappings, through a zero-shot learning task mapping between linguistic and auditory modalities. In addition, we evaluate multimodal representations on an unsupervised musical instrument clustering task. To our knowledge, this is the first work to combine linguistic and auditory information into multi-modal representations.",
"title": ""
},
{
"docid": "b6d707d7f4141bd4a298a27bd5f20449",
"text": "In this paper we consider active noise control (ANC) of impulsive noise having peaky distribution with heavy tail. Such impulsive noise can be modeled using non-Gaussian stable process for which second order moments do not exist. The most famous filtered-x least mean square (FxLMS) algorithm for ANC systems is based on second order moment of error signal, and hence, becomes unstable for the impulsive noise. Recently we have proposed variants of the FxLMS algorithm where improved performance has been realized either by thresholding the input data or efficiently normalizing the step-size for adaptation. In the practical ANC systems, these thresholding parameters need to be estimated offline and cannot be updated during online operation of ANC systems. Furthermore, normalizing the steps-size for an impulsive noise source would essentially freeze the adaptation for very large impulses. In order to solve these problems, in this paper we propose a novel approach for ANC of impulsive noise sources. The proposed approach is based on data-reusing (DR) type adaptive algorithm. The main idea is to improve the stability by normalizing the step-size, and improve the convergence speed by reusing the data. The computer simulations are carried out to verify the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "b34dbcd4a852e55b698df76d73afe0e9",
"text": "We present a new method for automatically detecting circular objects in images: we detect an osculating circle to an elliptic arc using a Hough transform, iteratively deforming it into an ellipse, removing outlier pixels, and searching for a separate edge. The voting space is restricted to one and two dimensions for efficiency, and special weighting schemes are introduced to enhance the accuracy. We demonstrate the effectiveness of our method using real images. Finally, we apply our method to the calibration of a turntable for 3-D object shape reconstruction.",
"title": ""
},
{
"docid": "f3a49052d58bb266fa45c348ad47b549",
"text": "Deep learning models based on CNNs are predominantly used in image classification tasks. Such approaches, assuming independence of object categories, normally use a CNN as a feature learner and apply a flat classifier on top of it. Object classes in many settings have hierarchical relations, and classifiers exploiting these relations should perform better. We propose hierarchical classification models combining a CNN to extract hierarchical representations of images, and an RNN or sequence-to-sequence model to capture a hierarchical tree of classes. In addition, we apply residual learning to the RNN part in oder to facilitate training our compound model and improve generalization of the model. Experimental results on a real world proprietary dataset of images show that our hierarchical networks perform better than state-of-the-art CNNs.",
"title": ""
},
{
"docid": "3473417f1701c82a4a06c00545437a3c",
"text": "The eXtensible Markup Language (XML) and related technologies offer promise for (among other things) applying data management technology to documents, and also for providing a neutral syntax for interoperability among disparate systems. But like many new technologies, it has raised unrealistic expectations. We give an overview of XML and related standards, and offer opinions to help separate vaporware (with a chance of solidifying) from hype. In some areas, XML technologies may offer revolutionary improvements, such as in processing databases' outputs and extending data management to semi-structured data. For some goals, either a new class of DBMSs is required, or new standards must be built. For such tasks, progress will occur, but may be measured in ordinary years rather than Web time. For hierarchical formatted messages that do not need maximum compression (e.g., many military messages), XML may have considerable benefit. For interoperability among enterprise systems, XML's impact may be moderate as an improved basis for software, but great in generating enthusiasm for standardizing concepts and schemas.",
"title": ""
},
{
"docid": "ccc6651b9bf4fcaa905d8e1bc7f9b6b4",
"text": "We introduce computational network (CN), a unified framework for describing arbitrary learning machines, such as deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short term memory (LSTM), logistic regression, and maximum entropy model, that can be illustrated as a series of computational steps. A CN is a directed graph in which each leaf node represents an input value or a parameter and each non-leaf node represents a matrix operation upon its children. We describe algorithms to carry out forward computation and gradient calculation in CN and introduce most popular computation node types used in a typical CN. We further introduce the computational network toolkit (CNTK), an implementation of CN that supports both GPU and CPU. We describe the architecture and the key components of the CNTK, the command line options to use CNTK, and the network definition and model editing language, and provide sample setups for acoustic model, language model, and spoken language understanding. We also describe the Argon speech recognition decoder as an example to integrate with CNTK.",
"title": ""
},
{
"docid": "6f9bca88fbb59e204dd8d4ae2548bd2d",
"text": "As the biomechanical literature concerning softball pitching is evolving, there are no data to support the mechanics of softball position players. Pitching literature supports the whole kinetic chain approach including the lower extremity in proper throwing mechanics. The purpose of this project was to examine the gluteal muscle group activation patterns and their relationship with shoulder and elbow kinematics and kinetics during the overhead throwing motion of softball position players. Eighteen Division I National Collegiate Athletic Association softball players (19.2 ± 1.0 years; 68.9 ± 8.7 kg; 168.6 ± 6.6 cm) who were listed on the active playing roster volunteered. Electromyographic, kinematic, and kinetic data were collected while players caught a simulated hit or pitched ball and perform their position throw. Pearson correlation revealed a significant negative correlation between non-throwing gluteus maximus during the phase of maximum external rotation to maximum internal rotation (MIR) and elbow moments at ball release (r = −0.52). While at ball release, trunk flexion and rotation both had a positive relationship with shoulder moments at MIR (r = 0.69, r = 0.82, respectively) suggesting that the kinematic actions of the pelvis and trunk are strongly related to the actions of the shoulder during throwing.",
"title": ""
},
{
"docid": "d003deabc7748959e8c5cc220b243e70",
"text": "INTRODUCTION In Britain today, children by the age of 10 years have regular access to an average of five different screens at home. In addition to the main family television, for example, many very young children have their own bedroom TV along with portable handheld computer game consoles (eg, Nintendo, Playstation, Xbox), smartphone with games, internet and video, a family computer and a laptop and/or a tablet computer (eg, iPad). Children routinely engage in two or more forms of screen viewing at the same time, such as TV and laptop. Viewing is starting earlier in life. Nearly one in three American infants has a TV in their bedroom, and almost half of all infants watch TV or DVDs for nearly 2 h/day. Across the industrialised world, watching screen media is the main pastime of children. Over the course of childhood, children spend more time watching TV than they spend in school. When including computer games, internet and DVDs, by the age of seven years, a child born today will have spent one full year of 24 h days watching screen media. By the age of 18 years, the average European child will have spent 3 years of 24 h days watching screen media; at this rate, by the age of 80 years, they will have spent 17.6 years glued to media screens. Yet, irrespective of the content or educational value of what is being viewed, the sheer amount of average daily screen time (ST) during discretionary hours after school is increasingly being considered an independent risk factor for disease, and is recognised as such by other governments and medical bodies but not, however, in Britain or in most of the EU. To date, views of the British and European medical establishments on increasingly high levels of child ST remain conspicuous by their absence. This paper will highlight the dramatic increase in the time children today spend watching screen media. It will provide a brief overview of some specific health and well-being concerns of current viewing levels, explain why screen viewing is distinct from other forms of sedentary behaviour, and point to the potential public health benefits of a reduction in ST. It is proposed that Britain and Europe’s medical establishments now offer guidance on the average number of hours per day children spend viewing screen media, and the age at which they start.",
"title": ""
},
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
},
{
"docid": "e682f1b64d6eae69252ea2298f035ac6",
"text": "Objective\nPatient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that needs to be removed to de-identify patient notes. Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.\n\n\nMaterials and Methods\nWe introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.\n\n\nResults\nOur ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.\n\n\nConclusion\nOur findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering.",
"title": ""
},
{
"docid": "e2649203ae3e8648c8ec1eafb7a19d6e",
"text": "This paper describes an algorithm to extract adaptive and quality quadrilateral/hexahedral meshes directly from volumetric data. First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level. Then the dual contouring method is used to extract a preliminary uniform quad/hex mesh, which is decomposed into finer quads/hexes adaptively without introducing any hanging nodes. The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately. Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results. Finally, a relaxation based technique is deployed to improve mesh quality. Several demonstration examples are provided from a wide variety of application domains. Some extracted meshes have been extensively used in finite element simulations.",
"title": ""
},
{
"docid": "a0358cfc6166fbd45d35cbb346c56b7a",
"text": "a Pontificia Universidad Católica de Valparaíso, Av. Brasil 2950, Valparaíso, Chile b Universidad Autónoma de Chile, Av. Pedro de Valdivia 641, Santiago, Chile c Universidad Finis Terrae, Av. Pedro de Valdivia 1509, Santiago, Chile d CNRS, LINA, University of Nantes, 2 rue de la Houssinière, Nantes, France e Escuela de Ingeniería Industrial, Universidad Diego Portales, Manuel Rodríguez Sur 415, Santiago, Chile",
"title": ""
},
{
"docid": "7b6c039783091260cee03704ce9748d8",
"text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.",
"title": ""
},
{
"docid": "41851f7a7153b3cd9a27fc8c42e8152d",
"text": "SUTWVIXYVZXL[ \\]T_^L[ `baLc![ dA^7eL`![;fhgig ekjWjWl8VZ[ZmBVnT_okg fAc!pqd/VbT_f1o7c![&l VZcrT_[&s/dhj t$aLdA`!`bd1uA[&l cb[&VZcrT_[&s/d/jvVZ[ wIXLokT_x eL[ ` dAc![;y [ w f1pzT_oLu;T_okl w c![ dA`{T_o u/jW|}a fAaLekj_dAcH~ dAc!uA[ cDVI[ZmkVI`>w d1onVZXL[ ony [qc![ akj_dAw [ ^ yA| T_p6a f1crVZdAo1V#VI[ZmkVD[Zm w [ cbakVZ`&tLVIXL[ c![ yA|<`{T_p6aBjWTWg| T_oLuiVIXL[>c![&l VZcrT_[&s/dhj/VZdA`!]dAoL^vT_p6aLcbf s T_oLu#c![&VIc{T_[&s/dhj/[& [ w&VbTWs1[ oL[ `b`H~@ d1`b`!dAu1[&l j_[&s1[&j2[&s T_^L[ oLw [>d1y fAekVDVZX [$eL`![$fhg\\#fAc!^L`
T_oj_fkw dhj%w f1oAVZ[ZmBVI` T_`#d/j_`bf>eL`![&g8ekjkg fAccb[ `!fhjWs T_oLuj_d1oLuAeLdAu1[#dAp$ykT_uAekTWV!T_[ `Kd1oL^zT_pil aLcbf s T_oLuic![&VIc{T_[&s/dhj fAekVIaLekV~ \\#fip6dhT_o6VI[ZmBV
^ [ w f1pqa fA`{TWVbT_f1oi`rVZcbd/VI[ u/T_[ `Kd1cb[DT_oAVZcbfk^LeLw [ ^ T_o;VZXBT_`i`{VIeL^k|/t2T_oLw&j_eL^BT_oLu}dnwZXLcbf1o f/j_fAu/T_w dhj^ [ w f1pqa fA`{TWVbT_f1o T_oAVZf$(!13 /qZ &tkdAoL^>`![ p6dAoAV!T_wK^L[ w f1pqa fA`{TWVbT_f1oNT_o1VIfz !1 LZq!&~ XL[KT_o1VI[ c!dAw&V!T_fAoy [&V\\#[ [ oVI[ZmkV%`b[ u1pq[ oAVZ`@d1o ^VI[ZmBV VZX [ p6[ `2T_`DVIXL[ o eL`![ ^ VIf6wIXLdAc!dAw&VZ[ c{T_ [NVZ[ZmBV`{VIc!eLw&VIeLc![/tBd1o ^ VZfNg fAc!p$ekj_dhVZ[%`ba [ w&TW3w d/VbT_f1oL` g fAc T_okg fAc!pqd/VbT_f1ovc![&VZcrT_[&s/dhj t VI[ZmBV VZcbd s1[ cb`!dhj tLdAoL^<VZ[ZmBV]`!e p6p6dAc{T_ dhV!T_fAo ~ KEYWORDS: [ZmBV `{VZcbeLw&VZe c{T_oLuktIVZ[ZmBV@^L[ w fAp6a f1`rTWV!T_fAo t&`![ uhl p6[ oAVI`Ht3VZX [ p6[ `HtBT_okg8f1cbp6dhV!T_fAo*cb[&VZcrT_[&s/d/j t a d1`b`!dAu1[>c![&VIc{T_[&s/dhj t VZ[ZmBV]`!e p6p6dAc{T_ dhV!T_fAo ~ TEXT PASSAGES AND TEXT RELATIONSHIP MAPS SUTWVIX}VIXL[6dA^ksA[ o1V$fhg#g ekjWjWl8VZ[ZmBVi^Lfkw eLpq[ oAVaLc!fkw [ `b`{T_oLukt@VIXL[ T_oAVZ[ cb[ `{V#T_o p6dAokT_aLekj_dhV!T_o uVI[ZmBV
a d1`b`!dAu1[ `#c!dhVZX [ c VIXLd1o fAokjW| g eBjWjWl VI[ZmkVvTWVZ[ pq`DXLdA`w fAo1VbT_o eL[ ^ VZfqu1cbf \\~>D[&VIc{T_[&s T_oLuij_dAc!uA[ VZ[ZmBVI`$T_odAoL`{\\#[ c$VZfneL`![ czxBe [ c{T_[ `$VI[ oL^L`>VZf y [6T_oL[&qw&T_[ o1V y [ w d1eL`b[ VZX [eL`b[ c@T_` VZX [ og8f1cbw [ ^VIfDw f1a [%\\]TWVIX>j_d1cbu1[2p6dA`!`b[ ` fhgDVZ[ZmBVt2dAoL^;T_oL[& [ w&V!TWsA[*y [ w dAeL`![qc![&j_[&s/dAoAV$VZ[ZmBV6aLdA`!`bd1uA[ ` fhgVZ[ o aLcbf s T_^L[ y [&VbVZ[ cd1oL`r\\N[ cb`VZXLdAo;w f1pqakj_[&VZ[i^ fkw eLp6[ oAV VZ[ZmBVI`H~F{o$d1^L^BTWV!T_fAo tAaLdA`!`!dAuA[&l j_[&sA[&jL[&sBT_^ [ oLw [dAw w fAeLoAV!T_oLu]g8f1c \\#f1cb^ieL`!dAuA[T_ozj_fkw d/jkVI[ZmBV[ oAs T_c!fAoLp6[ oAVZ`@T_`2fhgVZ[ o$X [&j_akg ekjkT_o T_p6a c!f s T_oLu]cb[&VZc{T_[&shd/j/[& [ w&V!TWsA[ oL[ `!`&t y [ w d1eL`b[@VZXL[Kp6[ dAokT_oLu2f/g dAp$ykT_u1e f1eL`#VI[ c!pq`Ny [ w fAp6[ `2w&j_[ d1c]\\DX [ oEVIXL[vj_fkw d/j w fAo1VI[ZmBV T_`DaLcbf1a [ c{jW|i`ba [ w&TW3[ ^ ~ WZl h T_oLw [zg ekjWj#VI[ZmkVI`>d1cb[6oL[ w [ `b`!dAc{TWjW|}w fAp6a f1`b[ ^ f/g]T_o ^kTWs T_^LeLdhj ¡I¢@£ ¤¦¥D¥8§ ̈ ©Ha6«@¬H¥
¥ ̈ ® ̄°§ ±!©q¤¦2*¬H ̄°§3&aq§ £ ±i ́@¬I§8¤μ® 2 ¬I¶ · ̧¤ ±!2 ̧ ± 1 ® ̈ 2© ¬I§ ¤ ® 2D ̈ 2 © ±b ̄ o ̄8¬&2&§ »(1⁄4 »F1⁄2&3⁄4&¿&¿hÀbÁIÂ/à VI[ZmBVaLdA`!`bd1uA[ `HtAd`{VIeL^k|$f/g3VZ[ZmBVa d1`b`!dAu1[ ` T_`2dhj_`!fvT_p6a f1crVZdAo1V g8f1c
^L[&VZ[ cbpiT_okT_oLu>f sA[ c!dhjWj VZ[ZmBV]`{VIc!eLw&VIeLc![/~KÄÅ`rVZc!e w&VZeLcbd/j ^ [&l w fAp6a fA`{TWV!T_fAoDfhgkVZ[ZmBVZ` T_oAVZf
aLd1`b`!dAu1[ ` p6dH|VZX [ oc![&sA[ d/jAT_okg8f1crl pqd/VbT_f1oqd1y fAekVVIXL[]V|ka [fhg VZ[ZmBVNe oL^L[ c2w f1oL`rT_^L[ c!dhV!T_fAo t dAoL^ oLfH\\]j_[ ^Lu1[#fhg VI[ZmkV V| a [2d1oL^>VI[ZmBV `{VIc!eLw&VIeLc![ T_o$VZeLcbo$d/ [ w&VZ` pqd1oA|iVI[ZmBVXLdAoL^kjWT_oLu>fAa [ cbd/VbT_f1o `HthT_oLw&j_eL^BT_oLu>c![&VZcrT_[&s/dhj tAVZ[ZmBV cb[ d1^BT_oLu$d1o ^EVIc!d sA[ c!`bd/j t d1o ^EVI[ZmBV
`beLp6pqd1crT_ d/VbT_f1o ~ X [`rVZc!e w&VZeLcb[ f/gkT_oL^BTWsBT_^ eLd/j VI[ZmkVI`Htf1c `b[&VZ`@fhgLc![&j_dhVZ[ ^DVI[ZmBVZ`Ht w dAoÆy [*`rVZeL^BT_[ ^Æy1|ÆeL`rT_oLu}dnVZ[ZmBV6cb[&j_d/VbT_f1o `!XkT_a;p6dAa;VZX d/V [Zm XkT_yBTWVZ`#VZX [Dc![ `bekjWVZ`2f/g%`rT_piTWj_dAc{TWV|<p6[ dA`!eLcb[ p6[&oAVZ`%y [&V\\N[ [ o a d/T_cb`Df/g VI[ZmBVZ`Ht fAc]VZ[ZmBV][Zm w [ c!akVI`H~ | aBT_w d/jWjW|/t [ dAwZX<VZ[ZmBV t fAc VI[ZmBV%[ZmLw [ cbakV@T_`%c![ a c![ `![ oAVI[ ^yA|,d
s1[ w&VIf1c fhgk\\N[&T_uAXAVZ[ ^VZ[ cbp6` fhg]VZX [ig fAc!pÈÇiÉÊÈË8ÌkÉÍ&Î$ÌkÉÏ Î&ÐWÐWÐWÎ&ÐWÐWÐWÎ{ÌkÉÑ&Ò\\DXL[ cb[$ÌkÉÓ<c![ aLcb[&l `b[ oAVZ`%dAo>T_p6a f1crVZdAoLw [ \\#[&T_u1XAVKg fAc VZ[ cbpYÔFÕDdhV!VId1wIXL[ ^vVZfv^Lfkw&l e p6[ oAV,Ç É ~ XL[iVI[ c!p6`vd/VbVZdAwZX [ ^ VZf ^Lfkw eLpq[ oAVZ`]g8f1c>w fAokl VI[ oAVzcb[ aLc![ `b[ oAVZdhV!T_fAo aLeLc!a fA`![ `p6d |}y [i\\#fAc!^L`>fAc,a XLc!dA`![ ` ^ [ c{TWs1[ ^;g8c!fApÖVZXL[<^Lfkw eLpq[ oAVVI[ZmkVI`$yA|×dAo;d1eBVZfAp6dhV!T_w$T_okl ^ [ZmkT_o u aLc!fkw [ ^Le c![/t dAoL^}VIXL[$VI[ c!pØ\\#[&T_uAX1VI`>d1cb[iw f1pqaLekVI[ ^ yA|<VZdAAT_oLu>T_oAVZfidAw w f1e o1V2VZX [fkw w eLcbc![ oLw [#wIXLd1cbd1w&VI[ c{T_`rV!T_w `%fhg VIXL[%VI[ c!pq`LT_oVZXL[FT_oL^kTWs T_^LeLdhjB^ fkw eLp6[ oAVZ` d1oL^vVZX [%^Lfkw eLpq[ o1V w fhjWj_[ w&V!T_fAo<d1`]dz\\DXLf/j_[/~@ Ù Äv`!`beLpiT_oLu
VZXLdhV [&sA[ c{|>VZ[ZmBVt1f1c VI[ZmBV[Zm w [ c!akV T_`2c![ a c![ `![ oAVI[ ^ T_o*s1[ w&VZfAc#g fAc!pÚdA`Dd<`![&VDfhg%\\#[&T_u1XAVZ[ ^ VI[ c!pq`HtBTWVT_`Da f1`b`{T_ykj_[ VIf w f1pqaLekVZ[aLdhT_c{\\]T_`b[$`{T_piTWj_dAc{TWV|nw fk[&6w&T_[ oAVZ`]`!X f \\]T_oLu<VZXL[ `rT_piTWj_dAc{TWV|y [&V\\N[ [ o a d/T_cb`fhgVI[ZmkVI`DyLdA`![ ^ fAo}w fhT_oLw&T_^L[ oLw [ ` T_onVZX [$VZ[ cbpÛdA`!`{T_uAoLpq[ o1VI`]VZf<VIXL[$c![ `ba [ w&VbTWs1[>TWVZ[ pq`H~ |kakl T_w dhjWjW|/t2VZX [6sA[ w&VZfAc,`rT_piTWj_dAc{TWV| piT_uAX1V$y [6w fAp6aLeBVZ[ ^dA`>VZXL[ T_o oL[ ciaLcbfk^LeLw&V$y [&V\\#[ [ o;w f1cbc![ `ba fAoL^kT_oLu<sA[ w&VZfAc$[&j_[ p6[ oAVZ`&t VIXLdhVT_`Ht Ü&Ý(ÞßË(Ç É ÎbÇ$à ÒáÊ âäãÕ&å æ Ì É Ó,Ì àrÓ t dAoL^ VIXL[$`{T_pzTWl j_dAc{TWV|qg eLo w&V!T_fAo6piT_uAXAVNy [DoLfAc!pqd/jWT_ [ ^,VZf>jWT_[y [&V\\#[ [ o6ç>g fAc ^BT_` è{fhT_oAVDs1[ w&VIf1cb`#d1oL^vg8f1c]w fAp6akj_[&VI[&jW|$T_^L[ oAV!T_w dhj sA[ w&VZfAc!`&~ é T_u1eLcb[nq`bXLf \\D`$d V|kakT_w dhjVI[ZmBVzcb[&j_d/VbT_f1o `!XkT_anp6dAag fAc$`{TRm VI[ZmBVZ`vT_oLw&j_eL^L[ ^nT_o}VIXL[ é e oL}dAoL^nSêdAu1oLdhjWj_`>[ oLw&|kw&j_f1a [ ^kT_d ^ [ d/jWT_oLu<\\]TWVIXVZX [$u1[ oL[ cbd/j@VIf1akT_w$fhg#ëeLw&j_[ dAcì2oL[ c!uh|/~ XL[ ^ fkw eLp6[ oAVZ`>dAaLa [ d1c>dA`ioLfk^L[ `EËs1[ c{VbT_w [ `ZÒT_oÆVIXL[<u1cbd1aLX;fhg é T_u1eLcb[vhthd1o ^d]jWT_o qË yLc!dAoLwIX Ò dAaLa [ d1cb` y [&V\\N[ [ ovV\\#fDoLfk^L[ ` \\DX [ oVIXL[%`{T_pzTWj_d1crTWV|y [&V\\N[ [ o
V\\Nf#VI[ZmkVI` T_` `bek6w&T_[ oAV!jW|
j_d1cbu1[/~ X [$`{T_piTWj_dAc{TWV| VIXLc![ `bXLfhj_^ eL`![ ^ VIf6yLeBTWj_^VZX [$p6dAa f/g é T_u/l e c![
T_` çk~ çL/tHVZXLdhV T_`Hthd/jWj yLcbd1oLwIXL[ `3c![ aLcb[ `![ oAV!T_oLu2d#VZ[ZmBV@`{T_pzl TWj_dAc{TWV|idAy f s1[#çk~ ç32dAc![#`bXLf \\Do$fAo>VZXL[2p6dAa ~ é T_u1e c![D]`bXLf \\D` VIXLdhV$VZXL[q`{T_piTWj_dAc{TWV|;p6[ dA`!e c![iy [&V\\N[ [ o ^Lfkw eLpq[ oAVZ`i Ùhç3í dAoL^îÙhç3HäË ëeLw&j_[ dAc$ì2oL[ c!uh|?dAoL^ëveLw&j_[ dAc$Sê[ dAa fAoL`IÒ>T_` d<XkT_uAXçk~¦ï1Ù t3\\DXL[ c![ dA`]oLfq`{T_uAokTW w dAo1Vv`{T_piTWj_dAc{TWV| [ZmBT_`{VI`y [&l V\\#[ [ o>ð1ñAç ÙvË ëeLw&j_[ dAc é T_`!`rT_f1o Ò dAoL^$í1íhòAðBÙDË X [ c!p6fAo eLw&j_[ dAc é eL`{T_fAo ÒZ~ 22387--Thermonuclear Fusion 19199--Radioactive Fallout 17016--Nuclear Weapons 17012--Nuclear Energy 11830--Hydrogen Bomb 8907--Fission, Nuclear 0.33 0.38 0.57 0.54",
"title": ""
},
{
"docid": "ee54c02fb1856ccf4f11fe1778f0883c",
"text": "Failure Mode, Mechanism and Effect Analysis (FMMEA) is a reliability analysis method which is used to study possible failure modes, failure mechanisms of each component, and to identify the effects of various failure modes on the components and functions. This paper introduces how to implement FMMEA on the Single Board Computer in detail, including system definition, identification of potential failure modes, analysis of failure cause, failure mechanism, and failure effect analysis. Finite element analysis is carried out for the Single Board Computer, including thermal stress analysis and vibration stress analysis. Temperature distribution and vibration modes are obtained, which are the inputs of physics of failure models. Using a variety of Physics of Failure models, the quantitative calculation of single point failure for the Single Board Computer are carried out. Results showed that the time to failure (TTF) of random access memory chip which is SOP (small outline package) is the shortest and the failure is due to solder joint fatigue failure caused by the temperature cycle. It is the weak point of the entire circuit board. Thus solder joint thermal fatigue failure is the main failure mechanism of the Single Board Computer. In the implementation process of PHM for the Single Board Computer, the failure condition of this position should be monitored.",
"title": ""
}
] |
scidocsrr
|
1a182fd027ee425e82ab7da3fff690c4
|
A Dataset for Inter-Sentence Relation Extraction using Distant Supervision
|
[
{
"docid": "54c6e02234ce1c0f188dcd0d5ee4f04c",
"text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.",
"title": ""
}
] |
[
{
"docid": "e0ee4f306bb7539d408f606d3c036ac5",
"text": "Despite the growing popularity of mobile web browsing, the energy consumed by a phone browser while surfing the web is poorly understood. We present an infrastructure for measuring the precise energy used by a mobile browser to render web pages. We then measure the energy needed to render financial, e-commerce, email, blogging, news and social networking sites. Our tools are sufficiently precise to measure the energy needed to render individual web elements, such as cascade style sheets (CSS), Javascript, images, and plug-in objects. Our results show that for popular sites, downloading and parsing cascade style sheets and Javascript consumes a significant fraction of the total energy needed to render the page. Using the data we collected we make concrete recommendations on how to design web pages so as to minimize the energy needed to render the page. As an example, by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience. We conclude by estimating the point at which offloading browser computations to a remote proxy can save energy on the phone.",
"title": ""
},
{
"docid": "3dbd412669a560c3a2bb47da0f86d024",
"text": "Microgrids are defined as groups of energy resources, both renewable and/or conventional, and loads located and interconnected in a specific physical area that appear as a single entity to the alternating-current (ac) electric grid. The use of distributed resources to power local loads combined with the capability to operate independently of the ac grid makes microgrids a technically feasible option to address the concerns of sustainability, resilience, and energy efficiency. Furthermore, microgrids can operate while completely separated from the grid, representing a lower-cost option to provide electrical power to regions in developing countries where conventional ac grids are not available or are too unreliable. When connected to the ac grid, microgrids appear as controlled entities within the power system that, instead of being a burden to the ac grid power-management system, represent a resource capable of supporting the grid. Energy storage as the element responsible for balancing generation with load is critical to the success of the microgrid concept, and it is more important as larger penetration of renewable resources is present in the microgrid. Accelerated improvements in performance and cost of energy-storage technologies during the last five years are making microgrids an economically viable option for power systems in the very near future (see Figure 1).",
"title": ""
},
{
"docid": "0eadb0a63cc4c9a5799a8fbb7db28943",
"text": "Sentiment analysis seeks to characterize opinionated or evaluative aspects of natural language text. We suggest here that appraisal expression extraction should be viewed as a fundamental task in sentiment analysis. An appraisal expression is a textual unit expressing an evaluative stance towards some target. The task is to find and characterize the evaluative attributes of such elements. This paper describes a system for effectively extracting and disambiguating adjectival appraisal expressions in English outputting a generic representation in terms of their evaluative function in the text. Data mining on appraisal expressions gives meaningful and non-obvious insights.",
"title": ""
},
{
"docid": "98669391168e56c407b1dc3756348a00",
"text": "This study assessed the relation between non-native subjects' age of learning (AOL) English and the overall degree of perceived foreign accent in their production of English sentences. The 240 native Italian (NI) subjects examined had begun learning English in Canada between the ages of 2 and 23 yr, and had lived in Canada for an average of 32 yr. Native English-speaking listeners used a continuous scale to rate sentences spoken by the NI subjects and by subjects in a native English comparison group. Estimates of the AOL of onset of foreign accents varied across the ten listeners who rated the sentences, ranging from 3.1 to 11.6 yr (M = 7.4). Foreign accents were evident in sentences spoken by many NI subjects who had begun learning English long before what is traditionally considered to be the end of a critical period. Very few NI subjects who began learning English after the age of 15 yr received ratings that fell within the native English range. Principal components analyses of the NI subjects' responses to a language background questionnaire were followed by multiple-regression analyses. AOL accounted for an average of 59% of variance in the foreign accent ratings. Language use factors accounted for an additional 15% of variance. Gender was also found to influence degree of foreign accent.",
"title": ""
},
{
"docid": "1ec731b5c586596705053309729d8427",
"text": "In this work the design and application of a fuzzy logic controller to DC-servomotor is investigated. The proposed strategy is intended to improve the performance of the original control system by use of a fuzzy logic controller (FLC) as the motor load changes. Computer simulation demonstrates that FLC is effective in position control of a DC-servomotor comparing with conventional one.",
"title": ""
},
{
"docid": "81ddb19f421eb3ba4c31b7fad5ed8045",
"text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. Given a set of reviews, the main task of aspect-based opinion mining is to extract major aspects of the items and to infer the latent aspect ratings from each review. However, users may have different preferences which might lead to different opinions on the same aspect of an item. Even if fine-grained aspect rating analysis is provided for each review, it is still difficult for a user to judge whether a specific aspect of an item meets his own expectation. In this paper, we study the problem of estimating personalized sentiment polarities on different aspects of the items. We propose a unified probabilistic model called Factorized Latent Aspect ModEl (FLAME), which combines the advantages of collaborative filtering and aspect based opinion mining. FLAME learns users' personalized preferences on different aspects from their past reviews, and predicts users' aspect ratings on new items by collective intelligence. Experiments on two online review datasets show that FLAME outperforms state-of-the-art methods on the tasks of aspect identification and aspect rating prediction.",
"title": ""
},
{
"docid": "7acfd4b984ea4ce59f95221463c02551",
"text": "An autopilot system includes several modules, and the software architecture has a variety of programs. As we all know, it is necessary that there exists one brand with a compatible sensor system till now, owing to complexity and variety of sensors before. In this paper, we apply (Robot Operating System) ROS-based distributed architecture. Deep learning methods also adopted by perception modules. Experimental results demonstrate that the system can reduce the dependence on the hardware effectively, and the sensor involved is convenient to achieve well the expected functionalities. The system adapts well to some specific driving scenes, relatively fixed and simple driving environment, such as the inner factories, bus lines, parks, highways, etc. This paper presents the case study of autopilot system based on ROS and deep learning, especially convolution neural network (CNN), from the perspective of system implementation. And we also introduce the algorithm and realization process including the core module of perception, decision, control and system management emphatically.",
"title": ""
},
{
"docid": "4c21ec3a600d773ea16ce6c45df8fe9d",
"text": "The efficacy of particle identification is compared using artificial neutral networks and boosted decision trees. The comparison is performed in the context of the MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics. r 2005 Elsevier B.V. All rights reserved. PACS: 29.85.+c; 02.70.Uu; 07.05.Mh; 14.60.Pq",
"title": ""
},
{
"docid": "d022a755229f5799e0811601e35e562c",
"text": "The use of orthopedic implants in joints has revolutionized the treatment of patients with many debilitating chronic musculoskeletal diseases such as osteoarthritis. However, the introduction of foreign material into the human body predisposes the body to infection. The treatment of these infections has become very complicated since the orthopedic implants serve as a surface for multiple species of bacteria to grow at a time into a resistant biofilm layer. This biofilm layer serves as a protectant for the bacterial colonies on the implant making them more resistant and difficult to eradicate when using standard antibiotic treatment. In some cases, the use of antibiotics alone has even made the bacteria more resistant to treatment. Thus, there has been surge in the creation of non-antibiotic anti-biofilm agents to help disrupt the biofilms on the orthopedic implants to help eliminate the infections. In this study, we discuss infections of orthopedic implants in the shoulder then we review the main categories of anti-biofilm agents that have been used for the treatment of infections on orthopedic implants. Then, we introduce some of the newer biofilm disrupting technology that has been studied in the past few years that may advance the treatment options for orthopedic implants in the future.",
"title": ""
},
{
"docid": "992bc1162eb7fc179bfb41ce0f6e0911",
"text": "A new inspection robot system for live-line suspension insulator strings was developed to prevent an insulator failure in 345-kV power transmission lines. Compared with the existing inspection robots, this robot structure is very simple, small-sized, lightweight, and more superior in insulation by adopting a wheel-leg moving mechanism. In addition, the robot measures the distribution voltage of an insulator together with its insulation resistance, thereby providing more information for its analysis and diagnosis. Moreover, a manual tool for its installation and removal is presented. Its effectiveness was confirmed through experiments, including a live-line test.",
"title": ""
},
{
"docid": "152701c9297aeaa4eb7d7891e6a08d8a",
"text": "End user satisfaction (EUS) is critical to successful information systems implementation. Many EUS studies in the past have attempted to identify the antecedents of EUS, yet most of Bernard Tan was the accepting senior editor for this paper. Guy Paré was the associate editor. Anne-Marie Croteau, William DeLone, and Ronald Thompson served as reviewers. the relationships found have been criticized for their lack of a strong theoretical underpinning. Today it is generally understood that IS failure is due to psychological and organizational issues rather than technological issues, hence individual differences must be addressed. This study proposes a new model with an objective to extend our understanding of the antecedents of EUS by incorporating three well-founded theories of motivation, namely expectation theory, needs theory, and equity theory. The uniqueness of the model not only recognizes the three different needs (i.e., work performance, relatedness, and self-development) that users may have with IS use, but also the corresponding inputs required from each individual to achieve those needs fulfillments, which have been ignored in most previous studies. This input/needs fulfillment ratio, referred to as equitable needs fulfillment, is likely to vary from one individual to another and satisfaction will only result in a user if the needs being fulfilled are perceived as “worthy” to obtain. The partial least squares (PLS) method of structural equation modeling was used to analyze 922 survey returns collected form the hotel and airline sectors. The results of the study show that IS end users do have different needs. Equitable work performance fulfillment and equitable relatedness fulfillment play a significant role in affecting the satisfaction of end users. The results also indicate that the impact of perceived IS performance expectations on EUS is not as significant as most previous studies have suggested. The conclusion is that merely focusing on the technical soundness of the IS and the way in which it benefits employees may not Au et al./Understanding EUS Formation 44 MIS Quarterly Vol. 32 No. 1/March 2008 be sufficient. Rather, the input requirements of users for achieving the corresponding needs fulfillments also need to be examined.",
"title": ""
},
{
"docid": "2542d28e599819a8baa2ba4c6c325214",
"text": "The paper presents the design, evaluation and performance comparison of cell based, low power adiabatic adder circuits operated by two-phase sinusoidal power clock signals, as against the literatures providing the operation of various adiabatic circuits, focusing on inverter circuits and logic gates, powered by ramp, three phase and four phase clock signals. The cells are designed for the quasi-adiabatic families, namely, 2N2P, 2N2N2P, PFAL, ADSL and IPGL for configuring complex adder circuits. A family of adiabatic cell based designs for carry lookahead adders and tree adders were designed. The simulations prove that the cell based design of tree adder circuits can save energy ranging from 2 to 100 over a frequency range of operation of 2MHz to 200MHz against the static CMOS circuit implementation. The schematic edit and T-Spice of Tanner tools formed the simulation environment.",
"title": ""
},
{
"docid": "be4b6d68005337457e66fcdf21a04733",
"text": "A real-time algorithm to detect eye blinks in a video sequence from a standard camera is proposed. Recent landmark detectors, trained on in-the-wild datasets exhibit excellent robustness against face resolution, varying illumination and facial expressions. We show that the landmarks are detected precisely enough to reliably estimate the level of the eye openness. The proposed algorithm therefore estimates the facial landmark positions, extracts a single scalar quantity – eye aspect ratio (EAR) – characterizing the eye openness in each frame. Finally, blinks are detected either by an SVM classifier detecting eye blinks as a pattern of EAR values in a short temporal window or by hidden Markov model that estimates the eye states followed by a simple state machine recognizing the blinks according to the eye closure lengths. The proposed algorithm has comparable results with the state-of-the-art methods on three standard datasets.",
"title": ""
},
{
"docid": "142f47f01a81b7978f65ea63460d98e5",
"text": "The developers of StarDog OWL/RDF DBMS have pioneered a new use of OWL as a schema language for RDF databases. This is achieved by adding integrity constraints (IC), also expressed in OWL syntax, to the traditional “open-world” OWL axioms. The new database paradigm requires a suitable visual schema editor. We propose here a two-level approach for integrated visual UML-style editing of extended OWL+IC ontologies: (i) introduce the notion of ontology splitter that can be used in conjunction with any OWL editor, and (ii) offer a custom graphical notation for axiom level annotations on the basis of compact UML-style OWL ontology editor OWLGrEd.",
"title": ""
},
{
"docid": "f0ea768c020a99ac3ed144b76893dbd9",
"text": "This paper focuses on tracking dynamic targets using a low cost, commercially available drone. The approach presented utilizes a computationally simple potential field controller expanded to operate not only on relative positions, but also relative velocities. A brief background on potential field methods is given, and the design and implementation of the proposed controller is presented. Experimental results using an external motion capture system for localization demonstrate the ability of the drone to track a dynamic target in real time as well as avoid obstacles in its way.",
"title": ""
},
{
"docid": "90acdc98c332de55e790d20d48dfde5e",
"text": "PURPOSE AND DESIGN\nSnack and Relax® (S&R), a program providing healthy snacks and holistic relaxation modalities to hospital employees, was evaluated for immediate impact. A cross-sectional survey was then conducted to assess the professional quality of life (ProQOL) in registered nurses (RNs); compare S&R participants/nonparticipants on compassion satisfaction (CS), burnout, and secondary traumatic stress (STS); and identify situations in which RNs experienced compassion fatigue or burnout and the strategies used to address these situations.\n\n\nMETHOD\nPre- and post vital signs and self-reported stress were obtained from S&R attendees (N = 210). RNs completed the ProQOL Scale measuring CS, burnout, and STS (N = 158).\n\n\nFINDINGS\nSignificant decreases in self-reported stress, respirations, and heart rate were found immediately after S&R. Low CS was noted in 28.5% of participants, 25.3% had high burnout, and 23.4% had high STS. S&R participants and nonparticipants did not differ on any of the ProQOL scales. Situations in which participants experienced compassion fatigue/burnout were categorized as patient-related, work-related, and personal/family-related. Strategies to address these situations were holistic and stress reducing.\n\n\nCONCLUSION\nProviding holistic interventions such as S&R for nurses in the workplace may alleviate immediate feelings of stress and provide a moment of relaxation in the workday.",
"title": ""
},
{
"docid": "a0501b0b3ba110692f9b162ce5f72c05",
"text": "RDF and related Semantic Web technologies have been the recent focus of much research activity. This work has led to new specifications for RDF and OWL. However, efficient implementations of these standards are needed to realize the vision of a world-wide semantic Web. In particular, implementations that scale to large, enterprise-class data sets are required. Jena2 is the second generation of Jena, a leading semantic web programmers’ toolkit. This paper describes the persistence subsystem of Jena2 which is intended to support large datasets. This paper describes its features, the changes from Jena1, relevant details of the implementation and performance tuning issues. Query optimization for RDF is identified as a promising area for future research.",
"title": ""
},
{
"docid": "bbd6a45aeefab204cb4395f676dbb2c1",
"text": "Evolving technology has created an inevitable threat of expose of data that is shared online. Wireshark tool enables the ethical hacker to reveal the flaws in the system security at the user authentication level. This approach of identifying vulnerabilities is deemed fit as the strategy involved in this testing is rapid and provides good success in identifying vulnerabilities. The usage of Wireshark also ensures that the procedure followed is up to the required standards. This paper discussed about the need to utilize penetration testing, the benefits of using Wireshark for the same and goes on to illustrate one method of using the tool to perform penetration testing. Most areas of a network are highly susceptible to security attacks by adversaries. This paper focuses on solving the aforementioned issue by surveying various tools available for penetration testing. This also provides a sample of basic penetration testing using Wireshark.",
"title": ""
},
{
"docid": "fa58c34ecd5544069fa3c58130c0f941",
"text": "Design patterns provide good solutions to re-occurring problems and several patterns and methods how to apply them have been documented for safety-critical systems. However, due to the large amount of safety-related patterns and methods, it is difficult to get an overview of their capabilities and shortcomings as there currently is no survey on safety patterns and their application methods available in literature.\n To give an overview of existing pattern-based safety development methods, this paper presents existing methods from literature and discusses and compares several aspects of these methods such as the patterns and tools they use, their integration into a safety development process, or their maturity.",
"title": ""
},
{
"docid": "bb0731a3bc69ddfe293fb1feb096f5f2",
"text": "To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. Efforts to automatically gather such information from unstructured text, however, is impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovation solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, which is way beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.",
"title": ""
}
] |
scidocsrr
|
35a2ae757b7451d1e663fa5ebb34db61
|
A multiresolution symbolic representation of time series
|
[
{
"docid": "9a43476b4038e554c28e09bae9140e24",
"text": "The success of text-based retrieval motivates us to investigate analogous techniques which can support the querying and browsing of image data. However, images differ significantly from text both syntactically and semantically in their mode of representing and expressing information. Thus, the generalization of information retrieval from the text domain to the image domain is non-trivial. This paper presents a framework for information retrieval in the image domain which supports content-based querying and browsing of images. A critical first step to establishing such a framework is to construct a codebook of \"keywords\" for images which is analogous to the dictionary for text documents. We refer to such \"keywords\" in the image domain as \"keyblocks.\" In this paper, we first present various approaches to generating a codebook containing keyblocks at different resolutions. Then we present a keyblock-based approach to content-based image retrieval. In this approach, each image is encoded as a set of one-dimensional index codes linked to the keyblocks in the codebook, analogous to considering a text document as a linear list of keywords. Generalizing upon text-based information retrieval methods, we then offer various techniques for image-based information retrieval. By comparing the performance of this approach with conventional techniques using color and texture features, we demonstrate the effectiveness of the keyblock-based approach to content-based image retrieval.",
"title": ""
},
{
"docid": "c7d6e273065ce5ca82cd55f0ba5937cd",
"text": "Many environmental and socioeconomic time–series data can be adequately modeled using Auto-Regressive Integrated Moving Average (ARIMA) models. We call such time–series ARIMA time–series. We consider the problem of clustering ARIMA time–series. We propose the use of the Linear Predictive Coding (LPC) cepstrum of time–series for clustering ARIMA time–series, by using the Euclidean distance between the LPC cepstra of two time–series as their dissimilarity measure. We demonstrate that LPC cepstral coefficients have the desired features for accurate clustering and efficient indexing of ARIMA time–series. For example, few LPC cepstral coefficients are sufficient in order to discriminate between time–series that are modeled by different ARIMA models. In fact this approach requires fewer coefficients than traditional approaches, such as DFT and DWT. The proposed distance measure can be used for measuring the similarity between different ARIMA models as well. We cluster ARIMA time–series using the Partition Around Medoids method with various similarity measures. We present experimental results demonstrating that using the proposed measure we achieve significantly better clusterings of ARIMA time–series data as compared to clusterings obtained by using other traditional similarity measures, such as DFT, DWT, PCA, etc. Experiments were performed both on simulated as well as real data.",
"title": ""
}
] |
[
{
"docid": "1114300ff9cab6dc29e80c4d22e45e1e",
"text": "Single- and dual-feed, dual-frequency, low-profile antennas with independent tuning using varactor diodes have been demonstrated. The dual-feed planar inverted F-antenna (PIFA) has two operating frequencies which can be independently tuned from 0.7 to 1.1 GHz and from 1.7 to 2.3 GHz with better than -10 dB impedance match. The isolation between the high-band and the low-band ports is >13 dB; hence, one resonant frequency can be tuned without affecting the other. The single-feed antenna has two resonant frequencies, which can be independently tuned from 1.2 to 1.6 GHz and from 1.6 to 2.3 GHz with better than -10 dB impedance match for most of the tuning range. The tuning is done using varactor diodes with a capacitance range from 0.8 to 3.8 pF, which is compatible with RF MEMS devices. The antenna volumes are 63 × 100 × 3.15 mm3 on er = 3.55 substrates and the measured antenna efficiencies vary between 25% and 50% over the tuning range. The application areas are in carrier aggregation systems for fourth generation (4G) wireless systems.",
"title": ""
},
{
"docid": "357e03d12dc50cf5ce27cadd50ac99fa",
"text": "This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.",
"title": ""
},
{
"docid": "080d4d757747be3a28923f9f7eb7e82e",
"text": "Online social networking offers a new, easy and inexpensive way to maintain already existing relationships and present oneself to others. However, the increasing number of actions in online services also gives a rise to privacy concerns and risks. In an attempt to understand the factors, especially privacy awareness, that influence users to disclose or protect information in online environment, we view privacy behavior from the perspectives of privacy protection and information disclosing. In our empirical study, we present results from a survey of 210 users of Facebook. Our results indicate, that most of our respondents, who seem to be active users of Facebook, disclose a considerable amount of private information. Contrary to their own belief, they are not too well aware of the visibility of their information to people they do not necessarily know. Furthermore, Facebook’s privacy policy and the terms of use were largely not known or understood by our respondents.",
"title": ""
},
{
"docid": "00bd0665891eb9cd9c865074dcf89e9a",
"text": "This case report presents the treatment of a patient with skeletal Cl II malocclusion and anterior open-bite who was treated with zygomatic miniplates through the intrusion of maxillary posterior teeth. A 16-year-old female patient with a chief complaint of anterior open-bite had a symmetric face, incompetent lips, convex profile, retrusive lower lip and chin. Intraoral examination showed that the buccal segments were in Class II relationship, and there was anterior open-bite (overbite -6.5 mm). The cephalometric analysis showed Class II skeletal relationship with increased lower facial height. The treatment plan included intrusion of the maxillary posterior teeth using zygomatic miniplates followed by fixed orthodontic treatment. At the end of treatment Class I canine and molar relationships were achieved, anterior open-bite was corrected and normal smile line was obtained. Skeletal anchorage using zygomatic miniplates is an effective method for open-bite treatment through the intrusion of maxillary posterior teeth.",
"title": ""
},
{
"docid": "4eafe7f60154fa2bed78530735a08878",
"text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.",
"title": ""
},
{
"docid": "025953bb13772965bd757216f58d2bed",
"text": "Designers use third-party intellectual property (IP) cores and outsource various steps in their integrated circuit (IC) design flow, including fabrication. As a result, security vulnerabilities have been emerging, forcing IC designers and end-users to reevaluate their trust in hardware. If an attacker gets hold of an unprotected design, attacks such as reverse engineering, insertion of malicious circuits, and IP piracy are possible. In this paper, we shed light on the vulnerabilities in very large scale integration (VLSI) design and fabrication flow, and survey design-for-trust (DfTr) techniques that aim at regaining trust in IC design. We elaborate on four DfTr techniques: logic encryption, split manufacturing, IC camouflaging, and Trojan activation. These techniques have been developed by reusing VLSI test principles.",
"title": ""
},
{
"docid": "9083d1159628f0b9a363aca5dea47591",
"text": "Cocitation and co-word methods have long been used to detect and track emerging topics in scientific literature, but both have weaknesses. Recently, while many researchers have adopted generative probabilistic models for topic detection and tracking, few have compared generative probabilistic models with traditional cocitation and co-word methods in terms of their overall performance. In this article, we compare the performance of hierarchical Dirichlet process (HDP), a promising generative probabilistic model, with that of the 2 traditional topic detecting and tracking methods— cocitation analysis and co-word analysis. We visualize and explore the relationships between topics identified by the 3 methods in hierarchical edge bundling graphs and time flow graphs. Our result shows that HDP is more sensitive and reliable than the other 2 methods in both detecting and tracking emerging topics. Furthermore, we demonstrate the important topics and topic evolution trends in the literature of terrorism research with the HDP method.",
"title": ""
},
{
"docid": "a1fe9d395292fb3e4283f320022cacc7",
"text": "Hepatitis A is a common disease in developing countries and Albania has a high prevalence of this disease associated to young age. In spite of the occurrence of a unique serotype there are different genotypes classified from I to VII. Genotype characterisation of HAV isolates circulating in Albania has been undertaken, as well as the study of the occurrence of antigenic variants in the proteins VP3 and VP1. To evaluate the genetic variability of the Albanian hepatitis A virus (HAV) isolates, samples were collected from 12 different cities, and the VP1/2A junction amplified and sequenced. These sequences were aligned and a phylogenetic analysis performed. Additionally, the amino half sequence of the protein VP3 and the complete sequence of the VP1 was determined. Anti-HAV IgM were present in 66.2% of all the sera. Fifty HAV isolates were amplified and the analysis revealed that all the isolates were sub-genotype IA with only limited mutations. When the deduced amino acid sequences were obtained, the alignment showed only two amino acids substitutions at positions 22 and 34 of the 2A protein. A higher genomic stability of the VP1/2A region, in contrast with what occurs in other parts of the world could be observed, indicating high endemicity of HAV in Albania. In addition, two potential antigenic variants were detected. The first at position 46 of VP3 in seven isolates and the second at position 23 of VP1 in six isolates.",
"title": ""
},
{
"docid": "2e22e31edd858ac43035502979e0302e",
"text": "We perform a large-scale topology mapping and geolocation study for China's Internet. To overcome the limited number of Chinese PlanetLab nodes and looking glass servers, we leverage several unique features in China's Internet, including the hierarchical structure of the major ISPs and the abundance of IDCs. Using only 15 vantage points, we design a traceroute scheme that finds significantly more interfaces and links than iPlane with significantly fewer traceroute probes. We then consider the problem of geolocating router interfaces and end hosts in China. We develop a heuristic for clustering the interface topology of a hierarchical ISP, and then apply the heuristic to the major Chinese ISPs. We show that the clustering heuristic can geolocate router interfaces with significantly more detail and accuracy than can the existing geoIP databases in isolation, and the resulting clusters expose the major ISPs' provincial structure. Finally, using the clustering heuristic, we propose a methodology for improving commercial geoIP databases.",
"title": ""
},
{
"docid": "e4b0ac07d84c51e5c9251f907b597ab9",
"text": "Audio fingerprinting, also named as audio hashing, has been well-known as a powerful technique to perform audio identification and synchronization. It basically involves two major steps: fingerprint (voice pattern) design and matching search. While the first step concerns the derivation of a robust and compact audio signature, the second step usually requires knowledge about database and quick-search algorithms. Though this technique offers a wide range of real-world applications, to the best of the authors’ knowledge, a comprehensive survey of existing algorithms appeared more than eight years ago. Thus, in this paper, we present a more up-to-date review and, for emphasizing on the audio signal processing aspect, we focus our state-of-the-art survey on the fingerprint design step for which various audio features and their tractable statistical models are discussed. Keywords–Voice pattern; audio identification and synchronization; spectral features; statistical models.",
"title": ""
},
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
},
{
"docid": "24041042e1216a3bbf6aab89fa6f0b93",
"text": "With the increasing demand for renewable energy, distributed power included in fuel cells have been studied and developed as a future energy source. For this system, a power conversion circuit is necessary to interface the generated power to the utility. In many cases, a high step-up DC/DC converter is needed to boost low input voltage to high voltage output. Conventional methods using cascade DC/DC converters cause extra complexity and higher cost. The conventional topologies to get high output voltage use flyback DC/DC converters. They have the leakage components that cause stress and loss of energy that results in low efficiency. This paper presents a high boost converter with a voltage multiplier and a coupled inductor. The secondary voltage of the coupled inductor is rectified using a voltage multiplier. High boost voltage is obtained with low duty cycle. Theoretical analysis and experimental results verify the proposed solutions using a 300 W prototype.",
"title": ""
},
{
"docid": "c4b229c26003fe40e046a6009b28dd52",
"text": "Sleep is an important part of our lives which affects many life factors such as memory, learning, metabolism and the immune system. Researchers have found correlations between sleep and several diseases such as Chronic Obstructive Pulmonary disease, Chronic Heart Failure, Alzheimer's disease, etc. However, sleep data is mainly recorded and diagnosed in sleep labs or in hospitals for some critical cases with high costs. In this work we develop a non-invasive, wearable neck-cuff system capable of real-time monitoring and visualization of physiological signals. These signals are generated from various sensors housed in a soft neck-worn collar and sent via Bluetooth to a cell phone which stores the data. This data is processed and reported to the user or uploaded to the cloud and/or to a local PC. With this system we are able to monitor people's sleep continuously in a non-invasive and low cost method while at the same time collect a large database for sleep data which may benefit future advances in new findings and possibly enable a diagnosis of other diseases. We show as one of the applications of our system the possible detection of obstructive sleep apnea which is a common sleep disorder.",
"title": ""
},
{
"docid": "c63559ed7971471d7d4b44b85c2917ac",
"text": "Vehicle-bicycle accidents with subsequent dragging of the rider over long distances are extremely rare. The case reported here is that of a 16-year-old mentally retarded bike rider who was run over by a truck whose driver failed to notice the accident. The legs of the victim became trapped by the rear axle of the trailer and the body was dragged over 45 km before being discovered under the parked truck. The autopsy revealed that the boy had died from the initial impact and not from the dragging injuries which had caused extensive mutilation. The reports of the technical expert and the forensic pathologist led the prosecutor to drop the case against the truck driver for manslaughter.",
"title": ""
},
{
"docid": "983fc1788fe5d9358eff85d3c16d000b",
"text": "Object tracking over wide-areas, such as an airport, the downtown of a large city or any large public area, is done by multiple cameras. Especially in realistic application, those cameras have non overlapping Field of Views (FOVs). Multiple camera tracking is very important to establish correspondence among detected objects across different cameras. In this paper we investigate color histogram techniques to evaluate inter-camera tracking algorithm based on object appearances. We compute HSV and RGB color histograms in order to evaluate their performance in establishing correspondence between object appearances in different FOVs before and after Cumulative Brightness Transfer Function (CBTF).",
"title": ""
},
{
"docid": "fcd51d3b067895f168c883e2c6f0bf78",
"text": "Partial differential equations (PDEs) are indispensable for modeling many physical phenomena and also commonly used for solving image processing tasks. In the latter area, PDE-based approaches interpret image data as discretizations of multivariate functions and the output of image processing algorithms as solutions to certain PDEs. Posing image processing problems in the infinite dimensional setting provides powerful tools for their analysis and solution. Over the last few decades, the reinterpretation of classical image processing problems through the PDE lens has been creating multiple celebrated approaches that benefit a vast area of tasks including image segmentation, denoising, registration, and reconstruction. In this paper, we establish a new PDE-interpretation of a class of deep convolutional neural networks (CNN) that are commonly used to learn from speech, image, and video data. Our interpretation includes convolution residual neural networks (ResNet), which are among the most promising approaches for tasks such as image classification having improved the state-of-the-art performance in prestigious benchmark challenges. Despite their recent successes, deep ResNets still face some critical challenges associated with their design, immense computational costs and memory requirements, and lack of understanding of their reasoning. Guided by well-established PDE theory, we derive three new ResNet architectures that fall into two new classes: parabolic and hyperbolic CNNs. We demonstrate how PDE theory can provide new insights and algorithms for deep learning and demonstrate the competitiveness of three new CNN architectures using numerical experiments.",
"title": ""
},
{
"docid": "6ee98121ecffa66c2bf390db70c15e09",
"text": "Fragment structure should find its application in acquiring high isolation between multipleinput multiple-output (MIMO) antennas. By gridding a design space into fragment cells, a fragmenttype isolation structure can be constructed by metalizing some of the fragment cells. For MIMO isolation design, cells to be metalized can be selected by optimization searching scheme with objectives such as isolation, return losses, and even radiation patterns of MIMO antennas. Due to the flexibility of fragment-type isolation structure, fragment-type structure has potentials to yield isolation higher than canonical isolation structures. In this paper, multi-objective evolutionary algorithm based on decomposition combined with genetic operators (MOEA/D-GO) is applied to design fragment-type isolation structures for MIMO patch antennas and MIMO PIFAs. It is demonstrated that isolation can be improved to different extents by using fragment-type isolation design. Some technique aspects related to the fragment-type isolation design, such as effects of fragment cell size, design space, density of metal cells, and efficiency consideration, are further discussed.",
"title": ""
},
{
"docid": "8e5e9106022c630c38a7c5c75f130064",
"text": "While growing evidence suggests that sentence understanding engages perceptual and motor systems for the purpose of mentally imagining or simulating the content of utterances (Barsalou 1999) it is not known whether processing words alone does the same. We investigated whether making a decision about the form of a word would lead to activation of motor mechanisms, using a modified version of the Actionsentence Compatibility Effect (Glenberg and Kaschak 2002). Fluent signers of American Sign Language (ASL) were shown pairs of ASL signs which were either identical or not. Critical signs involved hand motion forward or backward, relative to the body. Subjects indicated whether the two signs were the same or different with a manual response requiring their hand to move either forward or backward thus in a direction either compatible or incompatible with the direction of motion denoted by the sign. Results demonstrated a compatibility effect literal and metaphorical motion signs facilitated response motion in the same direction, suggesting that mere phonological processing of a lexical item with motion meaning engages the motor system. The same experiment, performed with non-signers yielded no such effect, demonstrating that the effect was not simply the result of perceptual processing of the form of the sign. These results support an embodied view of linguistic processing where the content of language about motor actions is simulated using parts of the cognitive system responsible for actually performing the described actions.",
"title": ""
},
{
"docid": "b53eb4535cf1c841983efd8bedd4fc0a",
"text": "Automated test execution is one of the more popular and available strategies to minimize the cost for software testing, and is also becoming one of the central concepts in modern software development as methods such as test-driven development gain popularity. Published studies on test automation indicate that the maintenance and development of test automation tools commonly encounter problems due to unforeseen issues. To further investigate this, we performed a case study on a telecommunication subsystem to seek factors that contribute to inefficiencies in use, maintenance, and development of the automated testing performed within the scope of responsibility of a software design team. A qualitative evaluation of the findings indicates that the main areas of improvement in this case are in the fields of interaction design and general software design principles, as applied to test execution system development.",
"title": ""
},
{
"docid": "a12538c128f7cd49f2561170f6aaf0ac",
"text": "We also define qp(kp) = 0, k ∈ Z. Fermat quotients appear and play a major role in various questions of computational and algebraic number theory and thus their distribution modulo p has been studied in a number of works; see, for example, [1, 5, 6, 7, 8, 9, 10, 11, 13, 15, 16, 17, 18] and the references therein. In particular, the image set Ip(U) = {qp(u) : 1 ≤ u ≤ U} has been investigated in some of these works. Let Ip(U) = #Ip(U) be the cardinality of Ip(U). It is well known (see, for example, [6, Section 2]) that",
"title": ""
}
] |
scidocsrr
|
32cc6a11ae258f4a4b8a302062c636f0
|
7 Ultra Wideband Impulse Radio Superregenerative Reception
|
[
{
"docid": "71ab4e8f2d2c9a30cfd13eb881116cdf",
"text": "The superregenerative receiver has been widely used for many decades in short-range wireless communications because of its relative simplicity, reduced cost, and low power consumption. However, the theory that describes the behavior of this type of receiver, which was mainly developed prior to 1950, is of limited scope, since it applies to particular implementations, usually operating under continuous-wave signal or narrow-band modulation. As a novelty, we present the theory of superregenerative reception from a generic point of view. We develop an analytic study based on a generic block diagram of the receiver and consider not only narrow-band, but a wider variety of input signals. The study allows general results and conclusions that can be easily particularized to specific implementations to be obtained. Starting from the proposed model, the differential equation that describes the operation of the receiver in the linear mode is deducted and solved. Normalized parameters and functions characterizing the performance of the receiver are presented, as well as the requirements for proper operation. Several characteristic phenomena, such as hangover and multiple resonance, are described. The nonlinear behavior of the active device is also modeled to obtain a solution of the differential equation in the logarithmic mode of operation. The study is completed with a practical example operating at 2.4 GHz and illustrating the typical performance of a superregenerative receiver.",
"title": ""
}
] |
[
{
"docid": "92596a29e208796d87637c2fff3c40a9",
"text": "How much did a network change since yesterday? How different is the wiring between Bob’s brain (a lefthanded male) and Alice’s brain (a right-handed female)? Graph similarity with known node correspondence, i.e. the detection of changes in the connectivity of graphs, arises in numerous settings. In this work, we formally state the axioms and desired properties of the graph similarity functions, and evaluate when state-ofthe-art methods fail to detect crucial connectivity changes in graphs. We propose DELTACON, a principled, intuitive, and scalable algorithm that assesses the similarity between two graphs on the same nodes (e.g. employees of a company, customers of a mobile carrier). Experiments on various synthetic and real graphs showcase the advantages of our method over existing similarity measures. Finally, we employ DELTACON to real applications: (a) we classify people to groups of high and low creativity based on their brain connectivity graphs, and (b) do temporal anomaly detection in the who-emails-whom Enron graph.",
"title": ""
},
{
"docid": "f2af256af6a405a3b223abc5d9a276ac",
"text": "Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX), a new trusted execution environment designed to serve security-critical applications on the cloud, lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory uses in SGX programs present a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized. This paper presents SGX-Shield, a new ASLR scheme designed for SGX environments. SGX-Shield is built on a secure in-enclave loader to secretly bootstrap the memory space layout with a finer-grained randomization. To be compatible with SGX hardware (e.g., remote attestation, fixed addresses), SGX-Shield is designed with a software-based data execution protection mechanism through an LLVM-based compiler. We implement SGX-Shield and thoroughly evaluate it on real SGX hardware. It shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common microbenchmarks and 2.25% overhead in running a more realistic workload of an HTTPS server.",
"title": ""
},
{
"docid": "f7c7e00e3a2b07cd5845b26d6522d16e",
"text": "This work employed Artificial Neural Networks and Decision Trees data analysis techniques to discover new knowledge from historical data about accidents in one of Nigeria’s busiest roads in order to reduce carnage on our highways. Data of accidents records on the first 40 kilometres from Ibadan to Lagos were collected from Nigeria Road Safety Corps. The data were organized into continuous and categorical data. The continuous data were analysed using Artificial Neural Networks technique and the categorical data were also analysed using Decision Trees technique .Sensitivity analysis was performed and irrelevant inputs were eliminated. The performance measures used to determine the performance of the techniques include Mean Absolute Error (MAE), Confusion Matrix, Accuracy Rate, True Positive, False Positive and Percentage correctly classified instances. Experimental results reveal that, between the machines learning paradigms considered, Decision Tree approach outperformed the Artificial Neural Network with a lower error rate and higher accuracy rate. Our research analysis also shows that, the three most important causes of accident are Tyre burst, loss of control and over speeding.",
"title": ""
},
{
"docid": "5183794d8bef2d8f2ee4048d75a2bd3c",
"text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"title": ""
},
{
"docid": "f1d00811120f666763e56e33ad2c3b10",
"text": "Fairness is a critical trait in decision making. As machine-learning models are increasingly being used in sensitive application domains (e.g. education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aeqitas approach automatically discovers discriminatory inputs that highlight fairness violation. At the core of Aeqitas are three novel strategies to employ probabilistic search over the input space with the objective of uncovering fairness violation. Our Aeqitas approach leverages inherent robustness property in common machine-learning models to design and implement scalable test generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model and improve its fairness. To this end, we design a fully automated module that guarantees to improve the fairness of the model. We implemented Aeqitas and we have evaluated it on six stateof- the-art classifiers. Our subjects also include a classifier that was designed with fairness in mind. We show that Aeqitas effectively generates inputs to uncover fairness violation in all the subject classifiers and systematically improves the fairness of respective models using the generated test inputs. In our evaluation, Aeqitas generates up to 70% discriminatory inputs (w.r.t. the total number of inputs generated) and leverages these inputs to improve the fairness up to 94%.",
"title": ""
},
{
"docid": "bc4ed7182695c62d7a2c8af82cdeb9fc",
"text": "The work in this paper is driven by the question how to exploit the temporal cues available in videos for their accurate classification, and for human action recognition in particular? Thus far, the vision community has focused on spatio-temporal approaches with fixed temporal convolution kernel depths. We introduce a new temporal layer that models variable temporal convolution kernel depths. We embed this new temporal layer in our proposed 3D CNN. We extend the DenseNet architecture which normally is 2D with 3D filters and pooling kernels. We name our proposed video convolutional network ‘Temporal 3D ConvNet’ (T3D) and its new temporal layer ‘Temporal Transition Layer’ (TTL). Our experiments show that T3D outperforms the current state-of-the-art methods on the HMDB51, UCF101 and Kinetics datasets. The other issue in training 3D ConvNets is about training them from scratch with a huge labeled dataset to get a reasonable performance. So the knowledge learned in 2D ConvNets is completely ignored. Another contribution in this work is a simple and effective technique to transfer knowledge from a pre-trained 2D CNN to a randomly initialized 3D CNN for a stable weight initialization. This allows us to significantly reduce the number of training samples for 3D CNNs. Thus, by finetuning this network, we beat the performance of generic and recent methods in 3D CNNs, which were trained on large video datasets, e.g. Sports-1M, and finetuned on the target datasets, e.g. HMDB51/UCF101. The T3D codes will be released soon1.",
"title": ""
},
{
"docid": "1d0c9c8c439f5fa41fee964caed7c2b1",
"text": "As interactive voice response systems become more prevalent and provide increasingly more complex functionality, it becomes clear that the challenges facing such systems are not solely in their synthesis and recognition capabilities. Issues such as the coordination of turn exchanges between system and user also play an important role in system usability. In particular, both systems and users have difficulty determining when the other is taking or relinquishing the turn. In this paper, we seek to identify turn-taking cues correlated with human–human turn exchanges which are automatically computable. We compare the presence of potential prosodic, acoustic, and lexico-syntactic turn-yielding cues in prosodic phrases preceding turn changes (smooth switches) vs. turn retentions (holds) vs. backchannels in the Columbia Games Corpus, a large corpus of task-oriented dialogues, to determine which features reliably distinguish between these three. We identify seven turn-yielding cues, all of which can be extracted automatically, for future use in turn generation and recognition in interactive voice response (IVR) systems. Testing Duncan’s (1972) hypothesis that these turn-yielding cues are linearly correlated with the occurrence of turn-taking attempts, we further demonstrate that, the greater the number of turn-yielding cues that are present, the greater the likelihood that a turn change will occur. We also identify six cues that precede backchannels, which will also be useful for IVR backchannel generation and recognition; these cues correlate with backchannel occurrence in a quadratic manner. We find similar results for overlapping and for non-overlapping speech. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4e97003a5609901f1f18be1ccbf9db46",
"text": "Fog computing is strongly emerging as a relevant and interest-attracting paradigm+technology for both the academic and industrial communities. However, architecture and methodological approaches are still prevalent in the literature, while few research activities have specifically targeted so far the issues of practical feasibility, cost-effectiveness, and efficiency of fog solutions over easily-deployable environments. In this perspective, this paper originally presents i) our fog-oriented framework for Internet-of-Things applications based on innovative scalability extensions of the open-source Kura gateway and ii) its Docker-based containerization over challenging and resource-limited fog nodes, i.e., RaspberryPi devices. Our practical experience and experimental work show the feasibility of using even extremely constrained nodes as fog gateways; the reported results demonstrate that good scalability and limited overhead can be coupled, via proper configuration tuning and implementation optimizations, with the significant advantages of containerization in terms of flexibility and easy deployment, also when working on top of existing, off-the-shelf, and limited-cost gateway nodes.",
"title": ""
},
{
"docid": "9a2e7b1be8134edf012e6d86a6f4d313",
"text": "The care of a patient in the intensive care unit extends well beyond his or her hospitalization. Evaluation of a patient after leaving the intensive care unit involves a review of the hospital stay, including principal diagnosis, exposure to medications, period spent in the intensive care unit, and history of prolonged mechanical ventilation. Fatigue should prompt evaluation for possible anemia, nutritional deficits, sleep disturbance, muscular deconditioning, and neurologic impairment. Other common problems include poor appetite with possible weight loss, falls, and sexual dysfunction. Psychological morbidities, posttraumatic stress disorder, anxiety disorder, and depression also often occur in the post-intensive care unit patient. These conditions are more common among patients with a history of delirium, prolonged sedation, mechanical ventilation, and acute respiratory distress syndrome. The physician should gain an understanding of the patient's altered quality of life, including employment status, and the state of his or her relationships with loved ones or the primary caregiver. As in many aspects of medicine, a multidisciplinary treatment approach is most beneficial to the post-intensive care unit patient.",
"title": ""
},
{
"docid": "2dc69fff31223cd46a0fed60264b2de1",
"text": "The authors offer a framework for conceptualizing collective identity that aims to clarify and make distinctions among dimensions of identification that have not always been clearly articulated. Elements of collective identification included in this framework are self-categorization, evaluation, importance, attachment and sense of interdependence, social embeddedness, behavioral involvement, and content and meaning. For each element, the authors take note of different labels that have been used to identify what appear to be conceptually equivalent constructs, provide examples of studies that illustrate the concept, and suggest measurement approaches. Further, they discuss the potential links between elements and outcomes and how context moderates these relationships. The authors illustrate the utility of the multidimensional organizing framework by analyzing the different configuration of elements in 4 major theories of identification.",
"title": ""
},
{
"docid": "faf76771bbb1f2a84148703d2bde9d25",
"text": "In this paper we describe the analysis of using Q-learning to acquire overtaking and blocking skills in simulated car racing games. Overtaking and blocking are more complicated racing skills compared to driving alone, and past work on this topic has only touched overtaking in very limited scenarios. Our work demonstrates that a driving AI agent can learn overtaking and blocking skills via machine learning, and the acquired skills are applicable when facing different opponent types and track characteristics, even on actual built-in tracks in TORCS.",
"title": ""
},
{
"docid": "46829dde25c66191bcefae3614c2dd3f",
"text": "User-generated content (UGC) on the Web, especially on social media platforms, facilitates the association of additional information with digital resources; thus, it can provide valuable supplementary content. However, UGC varies in quality and, consequently, raises the challenge of how to maximize its utility for a variety of end-users. This study aims to provide researchers and Web data curators with comprehensive answers to the following questions: What are the existing approaches and methods for assessing and ranking UGC? What features and metrics have been used successfully to assess and predict UGC value across a range of application domains? What methods can be effectively employed to maximize that value? This survey is composed of a systematic review of approaches for assessing and ranking UGC: results are obtained by identifying and comparing methodologies within the context of short text-based UGC on the Web. Existing assessment and ranking approaches adopt one of four framework types: the community-based framework takes into consideration the value assigned to content by a crowd of humans, the end-user--based framework adapts and personalizes the assessment and ranking process with respect to a single end-user, the designer-based framework encodes the software designer’s values in the assessment and ranking method, and the hybrid framework employs methods from more than one of these types. This survey suggests a need for further experimentation and encourages the development of new approaches for the assessment and ranking of UGC.",
"title": ""
},
{
"docid": "bf2f9a0387de2b2aa3136a2879a07e83",
"text": "Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization opportunities. We introduce a learning algorithm for deterministic OO-MDPs and prove a polynomial bound on its sample complexity. We illustrate the performance gains of our representation and algorithm in the well-known Taxi domain, plus a real-life videogame.",
"title": ""
},
{
"docid": "9d82a9ea1c1927d41620a45a5b4e96cb",
"text": "We introduce an approach to elastic registration of tomographic images based on thin-plate splines. Central to this scheme is a well-de ned minimizing functional for which the solution can be stated analytically. In this work, we consider the integration of anisotropic landmark errors as well as additional attributes at landmarks. As attributes we use orientations at landmarks and we incorporate the corresponding constraints through scalar products. With our approximation scheme it is thus possible to integrate statistical as well as geometric information as additional knowledge in elastic image registration. On the basis of synthetic as well as real tomographic images we show that this additional knowledge can signi cantly improve the registration result. In particular, we demonstrate that our scheme incorporating orientation attributes can preserve the shape of rigid structures (such as bone) embedded in an otherwise elastic material. This is achieved without selecting further landmarks and without a full segmentation of the rigid structures.",
"title": ""
},
{
"docid": "51c0cdb22056a3dc3f2f9b95811ca1ca",
"text": "Technology plays the major role in healthcare not only for sensory devices but also in communication, recording and display device. It is very important to monitor various medical parameters and post operational days. Hence the latest trend in Healthcare communication method using IOT is adapted. Internet of things serves as a catalyst for the healthcare and plays prominent role in wide range of healthcare applications. In this project the PIC18F46K22 microcontroller is used as a gateway to communicate to the various sensors such as temperature sensor and pulse oximeter sensor. The microcontroller picks up the sensor data and sends it to the network through Wi-Fi and hence provides real time monitoring of the health care parameters for doctors. The data can be accessed anytime by the doctor. The controller is also connected with buzzer to alert the caretaker about variation in sensor output. But the major issue in remote patient monitoring system is that the data as to be securely transmitted to the destination end and provision is made to allow only authorized user to access the data. The security issue is been addressed by transmitting the data through the password protected Wi-Fi module ESP8266 which will be encrypted by standard AES128 and the users/doctor can access the data by logging to the html webpage. At the time of extremity situation alert message is sent to the doctor through GSM module connected to the controller. Hence quick provisional medication can be easily done by this system. This system is efficient with low power consumption capability, easy setup, high performance and time to time response.",
"title": ""
},
{
"docid": "56d4abc61377dc2afa3ded978d318646",
"text": "Clothoids, i.e. curves Z(s) in RI whoem curvatures xes) are linear fitting functions of arclength ., have been nued for some time for curve fitting purposes in engineering applications. The first part of the paper deals with some basic interpolation problems for lothoids and studies the existence and uniqueness of their solutions. The second part discusses curve fitting problems for clothoidal spines, i.e. C2-carves, which are composed of finitely many clothoids. An iterative method is described for finding a clothoidal spline Z(aJ passing through given Points Z1 cR 2 . i = 0,1L.. n+ 1, which minimizes the integral frX(S) 2 ds.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "94ce2d2133bf1cc9bdae0b39b54f1d13",
"text": "Advocates of the concept of secondary traumatization propose that clinicians who provide trauma-focused treatment may be particularly at risk for experiencing secondary trauma symptoms. This specific symptom presentation purportedly develops following exposure to the traumatic experiences described by their clients. Consequently, these professionals have advocated for increases in resources devoted to the prevention and treatment of secondary trauma symptoms (e.g., enhanced clinician training, increase in availability of treatment options for affected trauma workers, etc.). A review of empirical literature examining prevalence and specificity of secondary trauma symptoms in trauma clinicians is provided. Findings are mixed and often indicate that trauma clinicians are not frequently experiencing \"clinically significant\" levels of symptoms and that these symptoms may not be uniquely associated with trauma-focused treatment. Finally, it is argued that additional clarification and research on the criterion, course, and associated impairment are needed. Recommendations for future research are provided.",
"title": ""
},
{
"docid": "056eaedfbf8c18418ea627f46fa8ac16",
"text": "The malleability of stereotyping matters in social psychology and in society. Previous work indicates rapid amygdala and cognitive responses to racial out-groups, leading some researchers to view these responses as inevitable. In this study, the methods of social-cognitive neuroscience were used to investigate how social goals control prejudiced responses. Participants viewed photographs of unfamiliar Black and White faces, under each of three social goals: social categorization (by age), social individuation (vegetable preference), and simple visual inspection (detecting a dot). One study recorded brain activity in the amygdala using functional magnetic resonance imaging, and another measured cognitive activation of stereotypes by lexical priming. Neither response to photos of the racial out-group was inevitable; instead, both responses depended on perceivers' current social-cognitive goal.",
"title": ""
},
{
"docid": "4ae82b3362756b0efed84596076ea6fb",
"text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.",
"title": ""
}
] |
scidocsrr
|
cbc3503191f08b53e61319b7149532ab
|
Learning Deep Binary Descriptor with Multi-quantization
|
[
{
"docid": "5c62f66d948f15cea55c1d2c9d10f229",
"text": "This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.",
"title": ""
},
{
"docid": "0964d1cc6584f2e20496c2f02952ba46",
"text": "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45% verification accuracy on LFW is achieved with only weakly aligned faces.",
"title": ""
},
{
"docid": "90dd589be3f8f78877367486e0f66e11",
"text": "Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel \"RomePatches\" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.",
"title": ""
}
] |
[
{
"docid": "b60e8a6f417d70499c7a6a251406c280",
"text": "Details are presented of a low cost augmented-reality system for the simulation of ultrasound guided needle insertion procedures (tissue biopsy, abscess drainage, nephrostomy etc.) for interventional radiology education and training. The system comprises physical elements; a mannequin, a mock ultrasound probe and a needle, and software elements; generating virtual ultrasound anatomy and allowing data collection. These two elements are linked by a pair of magnetic 3D position sensors. Virtual anatomic images are generated based on anatomic data derived from full body CT scans of live humans. Details of the novel aspects of this system are presented including; image generation, registration and calibration.",
"title": ""
},
{
"docid": "154fe2a3e68fae0e56b16e97075afe9b",
"text": "Most existing facial expression recognition methods assume the availability of a single emotion for each expression in the training set. However, in practical applications, an expression rarely expresses pure emotion, but often a mixture of different emotions. To address this problem, this paper deals with a more common case where multiple emotions are associated to each expression. The key idea is to learn the specific description degrees of all basic emotions for each expression and the mapping from the expression images to the emotion distributions by the proposed emotion distribution learning (EDL) method.The databases used in the experiments are the s-JAFFE database and the s-BU\\_3DFE database as they are the databases with explicit scores for each emotion on each expression image. Experimental results show that EDL can effectively deal with the emotion distribution recognition problem and perform remarkably better than the state-of-the-art multi-label learning methods.",
"title": ""
},
{
"docid": "24bd9a2f85b33b93609e03fc67e9e3a9",
"text": "With the rapid development of high-throughput technologies, researchers can sequence the whole metagenome of a microbial community sampled directly from the environment. The assignment of these metagenomic reads into different species or taxonomical classes is a vital step for metagenomic analysis, which is referred to as binning of metagenomic data. In this paper, we propose a new method TM-MCluster for binning metagenomic reads. First, we represent each metagenomic read as a set of \"k-mers\" with their frequencies occurring in the read. Then, we employ a probabilistic topic model -- the Latent Dirichlet Allocation (LDA) model to the reads, which generates a number of hidden \"topics\" such that each read can be represented by a distribution vector of the generated topics. Finally, as in the MCluster method, we apply SKWIC -- a variant of the classical K-means algorithm with automatic feature weighting mechanism to cluster these reads represented by topic distributions. Experiments show that the new method TM-MCluster outperforms major existing methods, including AbundanceBin, MetaCluster 3.0/5.0 and MCluster. This result indicates that the exploitation of topic modeling can effectively improve the binning performance of metagenomic reads.",
"title": ""
},
{
"docid": "5df16d75c20c2962183783dad89266d5",
"text": "The rapid growth of patent documents has called for the development of sophisticated patent analysis tools. Currently, there are various tools that are being utilized by organizations for analyzing patents. These tools are capable of performing wide range of tasks, such as analyzing and forecasting future technological trends, conducting strategic technology planning, detecting patent infringement, determining patents quality and the most promising patents, and identifying technological hotspots and patent vacuums. This literature review presents the state-of-the-art in patent analysis and also presents taxonomy of patent analysis techniques. Moreover, the key features and weaknesses of the discussed tools and techniques are presented and several directions for future research are highlighted. The literature review will be helpful for the researchers in finding the latest research efforts pertaining to the patent analysis in a unified form. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6660bcfd564726421d9eaaa696549454",
"text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.",
"title": ""
},
{
"docid": "29df7892b16864cb3721a05886bbcc82",
"text": "With the rapid growth of the cyber attacks, sharing of cyber threat intelligence (CTI) becomes essential to identify and respond to cyber attack in timely and cost-effective manner. However, with the lack of standard languages and automated analytics of cyber threat information, analyzing complex and unstructured text of CTI reports is extremely time- and labor-consuming. Without addressing this challenge, CTI sharing will be highly impractical, and attack uncertainty and time-to-defend will continue to increase.\n Considering the high volume and speed of CTI sharing, our aim in this paper is to develop automated and context-aware analytics of cyber threat intelligence to accurately learn attack pattern (TTPs) from commonly available CTI sources in order to timely implement cyber defense actions. Our paper has three key contributions. First, it presents a novel threat-action ontology that is sufficiently rich to understand the specifications and context of malicious actions. Second, we developed a novel text mining approach that combines enhanced techniques of Natural Language Processing (NLP) and Information retrieval (IR) to extract threat actions based on semantic (rather than syntactic) relationship. Third, our CTI analysis can construct a complete attack pattern by mapping each threat action to the appropriate techniques, tactics and kill chain phases, and translating it any threat sharing standards, such as STIX 2.1. Our CTI analytic techniques were implemented in a tool, called TTPDrill, and evaluated using a randomly selected set of Symantec Threat Reports. Our evaluation tests show that TTPDrill achieves more than 82% of precision and recall in a variety of measures, very reasonable for this problem domain.",
"title": ""
},
{
"docid": "ffc713ce3f7d47ce61112bb96a591dfc",
"text": "It has been shown that injecting noise into the neural network weights during the training process leads to a better generalization of the resulting model. Noise injection in the distributed setup is a straightforward technique and it represents a promising approach to improve the locally trained models. We investigate the effects of noise injection into the neural networks during a decentralized training process. We show both theoretically and empirically that noise injection has no positive effect in expectation on linear models, though. However for non-linear neural networks we empirically show that noise injection substantially improves model quality helping to reach a generalization ability of a local model close to the serial baseline.",
"title": ""
},
{
"docid": "24a78bcc7c60ab436f6fd32bdc0d7661",
"text": "Passing the Turing Test is not a sensible goal for Artificial Intelligence. Adherence to Turing's vision from 1950 is now actively harmful to our field. We review problems with Turing's idea, and suggest that, ironically, the very cognitive science that he tried to create must reject his research goal.",
"title": ""
},
{
"docid": "7a055093ac92c7d2fa7aa8dcbe47a8b8",
"text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.",
"title": ""
},
{
"docid": "30d191f30f8d0cd0fd0d9b99a440a1df",
"text": "Despite their ubiquitous presence, texture-less objects present significant challenges to contemporary visual object detection and localization algorithms. This paper proposes a practical method for the detection and accurate 3D localization of multiple texture-less and rigid objects depicted in RGB-D images. The detection procedure adopts the sliding window paradigm, with an efficient cascade-style evaluation of each window location. A simple pre-filtering is performed first, rapidly rejecting most locations. For each remaining location, a set of candidate templates (i.e. trained object views) is identified with a voting procedure based on hashing, which makes the method's computational complexity largely unaffected by the total number of known objects. The candidate templates are then verified by matching feature points in different modalities. Finally, the approximate object pose associated with each detected template is used as a starting point for a stochastic optimization procedure that estimates accurate 3D pose. Experimental evaluation shows that the proposed method yields a recognition rate comparable to the state of the art, while its complexity is sub-linear in the number of templates.",
"title": ""
},
{
"docid": "4cb942fd2549525412b1a49590d4dfbd",
"text": "This paper proposes a new adaptive patient-cooperative control strategy for improving the effectiveness and safety of robot-assisted ankle rehabilitation. This control strategy has been developed and implemented on a compliant ankle rehabilitation robot (CARR). The CARR is actuated by four Festo Fluidic muscles located to the calf in parallel, has three rotational degrees of freedom. The control scheme consists of a position controller implemented in joint space and a high-level admittance controller in task space. The admittance controller adaptively modifies the predefined trajectory based on real-time ankle measurement, which enhances the training safety of the robot. Experiments were carried out using different modes to validate the proposed control strategy on the CARR. Three training modes include: 1) a passive mode using a joint-space position controller, 2) a patient–robot cooperative mode using a fixed-parameter admittance controller, and 3) a cooperative mode using a variable-parameter admittance controller. Results demonstrate satisfactory trajectory tracking accuracy, even when externally disturbed, with a maximum normalized root mean square deviation less than 5.4%. These experimental findings suggest the potential of this new patient-cooperative control strategy as a safe and engaging control solution for rehabilitation robots.",
"title": ""
},
{
"docid": "4d882c081dab44b941e3006b274fc91c",
"text": "A novel, highly efficient and broadband RF power amplifier (PA) operating in “continuous class-F” mode has been realized for first time. The introduction and experimental verification of this new PA mode demonstrates that it is possible to maintain expected output performance, both in terms of efficiency and power, over a very wide bandwidth. Using recently established continuous class-F theory, an output matching network was designed to terminate the first three harmonic impedances. This resulted in a PA delivering an average drain efficiency of 74% and average output power of 10.5W for an octave bandwidth between 0.55GHz and 1.1GHz. A commercially available 10W GaN HEMT transistor has been used for the PA design and realization.",
"title": ""
},
{
"docid": "4680bed6fb799e6e181cc1c2a4d56947",
"text": "We address the problem of vision-based multi-person tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. Specifically, we are interested in the application of such a system for supporting path planning algorithms in the avoidance of dynamic obstacles. The complexity of the problem calls for an integrated solution, which extracts as much visual information as possible and combines it through cognitive feedback. We propose such an approach, which jointly estimates camera position, stereo depth, object detections, and trajectories based only on visual information. The interplay between these components is represented in a graphical model. For each frame, we first estimate the ground surface together with a set of object detections. Based on these results, we then address object interactions and estimate trajectories. Finally, we employ the tracking results to predict future motion for dynamic objects and fuse this information with a static occupancy map estimated from dense stereo. The approach is experimentally evaluated on several long and challenging video sequences from busy inner-city locations recorded with different mobile setups. The results show that the proposed integration makes stable tracking and motion prediction possible, and thereby enables path planning in complex and highly dynamic scenes.",
"title": ""
},
{
"docid": "d5d03cdfd3a6d6c2b670794d76e91c8e",
"text": "We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/ ̃glai1/data/race/ and the code is available at https://github.com/ cheezer/RACE_AR_baselines.",
"title": ""
},
{
"docid": "5336d55ddbb28bf4be1a409c0c04adbd",
"text": "Non-stationarities are ubiquitous in EEG signals. They are especially apparent in the use of EEG-based brain-computer interfaces (BCIs): (a) in the differences between the initial calibration measurement and the online operation of a BCI, or (b) caused by changes in the subject's brain processes during an experiment (e.g. due to fatigue, change of task involvement, etc). In this paper, we quantify for the first time such systematic evidence of statistical differences in data recorded during offline and online sessions. Furthermore, we propose novel techniques of investigating and visualizing data distributions, which are particularly useful for the analysis of (non-)stationarities. Our study shows that the brain signals used for control can change substantially from the offline calibration sessions to online control, and also within a single session. In addition to this general characterization of the signals, we propose several adaptive classification schemes and study their performance on data recorded during online experiments. An encouraging result of our study is that surprisingly simple adaptive methods in combination with an offline feature selection scheme can significantly increase BCI performance.",
"title": ""
},
{
"docid": "dffc11786d4a0d9247e22445f48d8fca",
"text": "Tuberization in potato (Solanum tuberosum L.) is a complex biological phenomenon which is affected by several environmental cues, genetic factors and plant nutrition. Understanding the regulation of tuber induction is essential to devise strategies to improve tuber yield and quality. It is well established that short-day photoperiods promote tuberization, whereas long days and high-temperatures inhibit or delay tuberization. Worldwide research on this complex biological process has yielded information on the important bio-molecules (proteins, RNAs, plant growth regulators) associated with the tuberization process in potato. Key proteins involved in the regulation of tuberization include StSP6A, POTH1, StBEL5, StPHYB, StCONSTANS, Sucrose transporter StSUT4, StSP5G, etc. Biomolecules that become transported from \"source to sink\" have also been suggested to be important signaling candidates regulating the tuberization process in potatos. Four molecules, namely StSP6A protein, StBEL5 RNA, miR172 and GAs, have been found to be the main candidates acting as mobile signals for tuberization. These biomolecules can be manipulated (overexpressed/inhibited) for improving the tuberization in commercial varieties/cultivars of potato. In this review, information about the genes/proteins and their mechanism of action associated with the tuberization process is discussed.",
"title": ""
},
{
"docid": "940613490eb3b248f62524d3ea695445",
"text": "This paper describes an implemented on-line English grammar checker for students of English as a second language. This system focuses on a limited category of frequently occurring grammatical mistakes in essays written by students in the English Language Programs at the University of X. The grammar checker exploits the syntactic domain of locality from a Combina-tory Categorial Grammar for the purpose of identifying speciic types of grammatical mistakes as well as accepting grammatical expressions. It includes grammatical mistakes as ungrammatical variations of the constituents that can be related to given lexical entries in a categorial lexicon. The grammar checker is developed using one set of essays and tested against another set of essays. Unpredicted grammatical mistakes are either incorporated into the next revision or left as they are, depending on the diiculty and desirability of detecting them. The system also provides an interactive Internet interface for convenience of use. Finally, we discuss issues of constructing a full-scale English lexicon for an educational domain.",
"title": ""
},
{
"docid": "39389330b090031c22554669f84939a7",
"text": "The wide use of abbreviations in modern texts poses interesting challenges and opportunities in the field of NLP. In addition to their dynamic nature, abbreviations are highly polysemous with respect to regular words. Technologies that exhibit some level of language understanding may be adversely impacted by the presence of abbreviations. This paper addresses two related problems: (1) expansion of abbreviations given a context, and (2) translation of sentences with abbreviations. First, an efficient retrieval-based method for English abbreviation expansion is presented. Then, a hybrid system is used to pick among simple abbreviation-translation methods. The hybrid system achieves an improvement of 1.48 BLEU points over the baseline MT system, using sentences that contain abbreviations as a test set.",
"title": ""
},
{
"docid": "305dac2ffd4a04fa0ef9ca727edc6247",
"text": "A new control strategy for obtaining the maximum traction force of electric vehicles with individual rear-wheel drive is presented. A sliding-mode observer is proposed to estimate the wheel slip and vehicle velocity under unknown road conditions by measuring only the wheel speeds. The proposed observer is based on the LuGre dynamic friction model and allows the maximum transmissible torque for each driven wheel to be obtained instantaneously. The maximum torque can be determined at any operating point and road condition, thus avoiding wheel skid. The proposed strategy maximizes the traction force while avoiding tire skid by controlling the torque of each traction motor. Simulation results using a complete vehicle model under different road conditions are presented to validate the proposed strategy.",
"title": ""
},
{
"docid": "831ea386dcb15a6967196b90cf3b6516",
"text": "Advanced metering infrastructure (AMI) is an imperative component of the smart grid, as it is responsible for collecting, measuring, analyzing energy usage data, and transmitting these data to the data concentrator and then to a central system in the utility side. Therefore, the security of AMI is one of the most demanding issues in the smart grid implementation. In this paper, we propose an intrusion detection system (IDS) architecture for AMI which will act as a complimentary with other security measures. This IDS architecture consists of three local IDSs placed in smart meters, data concentrators, and central system (AMI headend). For detecting anomaly, we use data stream mining approach on the public KDD CUP 1999 data set for analysis the requirement of the three components in AMI. From our result and analysis, it shows stream data mining technique shows promising potential for solving security issues in AMI.",
"title": ""
}
] |
scidocsrr
|
21a8779ba69151f965ce3ee2c0bef2b1
|
A High-Frequency Three-Level Buck Converter With Real-Time Calibration and Wide Output Range for Fast-DVS
|
[
{
"docid": "e2175b85f438342a84453b5ad36ab4c5",
"text": "This paper presents a systematic analysis of integrated 3-level buck converters under both ideal and real conditions as a guidance for designing robust and fast 3-level buck converters. Under ideal conditions, the voltage conversion ratio, the output voltage ripple and, in particular, the system's loop-gain function are derived. Design considerations for real circuitry implementations of an integrated 3-level converter, such as the implementation of the flying capacitor, the impacts of the parasitic capacitors of the flying capacitor and the 4 power switches, and the time mismatch between the 2 duty-cycle signals are thoroughly discussed. Under these conditions, the voltage conversion ratio, the voltage across the flying capacitor and the power efficiency are analyzed and verified with Cadence simulation results. The loop-gain function of an integrated 3-level buck converter with parasitic capacitors and time mismatch is derived with the state-space averaging method. The derived loop-gain functions are verified with time-domain small signal injection simulation and measurement, with a good match between the analytical and experimental results.",
"title": ""
},
{
"docid": "b936c3cd8c64a7b7254e003918fb91d5",
"text": "On-chip DC-DC converters have the potential to offer fine-grain power management in modern chip-multiprocessors. This paper presents a fully integrated 3-level DC-DC converter, a hybrid of buck and switched-capacitor converters, implemented in 130 nm CMOS technology. The 3-level converter enables smaller inductors (1 nH) than a buck, while generating a wide range of output voltages compared to a 1/2 mode switched-capacitor converter. The test-chip prototype delivers up to 0.85 A load current while generating output voltages from 0.4 to 1.4 V from a 2.4 V input supply. It achieves 77% peak efficiency at power density of 0.1 W/mm2 and 63% efficiency at maximum power density of 0.3 W/mm2. The converter scales output voltage from 0.4 V to 1.4 V (or vice-versa) within 20 ns at a constant 450 mA load current. A shunt regulator reduces peak-to-peak voltage noise from 0.27 V to 0.19 V under pseudo-randomly fluctuating load currents. Using simulations across a wide range of design parameters, the paper compares conversion efficiencies of the 3-level, buck and switched-capacitor converters.",
"title": ""
},
{
"docid": "abe4b6d122d4d13374d70a886906aba7",
"text": "A 100-MHz PWM fully integrated buck converter utilizing standard package bondwire as power inductor with enhanced light-load efficiency which occupies 2.25 mm2 in 0.13-μm CMOS is presented. Standard package bondwire instead of on-chip spiral metal or special spiral bondwire is implemented as power inductor to minimize the cost and the conduction loss of an integrated inductor. The accuracy requirement of bondwire inductance is relaxed by an extra discontinuous-conduction-mode (DCM) calibration loop, which solves the precise DCM operation issue of fully integrated converters and eliminates the reverse current-related loss, thus enabling the use of standard package bondwire inductor with various packaging techniques. Optimizations of the power transistors, the input decoupling capacitor (CI), and the controller are also presented to achieve an efficient and robust high-frequency design. With all three major power losses, conduction loss, switching loss, and reverse current related loss, optimized or eliminated, the efficiency is significantly improved. An efficiency of 74.8% is maintained at 10 mA, and a peak efficiency of 84.7% is measured at nominal operating conditions with a voltage conversion of 1.2 to 0.9 V. Converters with various bondwire inductances from 3 to 8.5 nH are measured to verify the reliability and compatibility of different packaging techniques.",
"title": ""
}
] |
[
{
"docid": "b11decd397b775ab7103e747ba67ba19",
"text": "Over the last 60 years, the spotlight of research has periodically returned to the cerebellum as new techniques and insights have emerged. Because of its simple homogeneous structure, limited diversity of cell types and characteristic behavioral pathologies, the cerebellum is a natural home for studies of cell specification, patterning, and neuronal migration. However, recent evidence has extended the traditional range of perceived cerebellar function to include modulation of cognitive processes and implicated cerebellar hypoplasia and Purkinje neuron hypo-cellularity with autistic spectrum disorder. In the light of this emerging frontier, we review the key stages and genetic mechanisms behind cerebellum development. In particular, we discuss the role of the midbrain hindbrain isthmic organizer in the development of the cerebellar vermis and the specification and differentiation of Purkinje cells and granule neurons. These developmental processes are then considered in relation to recent insights into selected human developmental cerebellar defects: Joubert syndrome, Dandy-Walker malformation, and pontocerebellar hypoplasia. Finally, we review current research that opens up the possibility of using the mouse as a genetic model to study the role of the cerebellum in cognitive function.",
"title": ""
},
{
"docid": "1fd0f4fd2d63ef3a71f8c56ce6a25fb5",
"text": "A new ‘growing’ maximum likelihood classification algorithm for small reservoir delineation has been developed and is tested with Radarsat-2 data for reservoirs in the semi-arid Upper East Region, Ghana. The delineation algorithm is able to find the land-water boundary from SAR imagery for different weather and environmental conditions. As such, the algorithm allows for remote sensed operational monitoring of small reservoirs.",
"title": ""
},
{
"docid": "e82459841d697a538f3ab77817ed45e7",
"text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.",
"title": ""
},
{
"docid": "985c7b11637706e60726cf168790e594",
"text": "This Exploratory paper’s second part reveals the detail technological aspects of Hand Gesture Recognition (HGR) System. It further explored HGR basic building blocks, its application areas and challenges it faces. The paper also provides literature review on latest upcoming techniques like – Point Grab, 3D Mouse and Sixth-Sense etc. The paper concluded with focus on major Application fields.",
"title": ""
},
{
"docid": "4405611eafc1f6df4c4fa0b60a50f90d",
"text": "Balancing robot which is proposed in this paper is a robot that relies on two wheels in the process of movement. Unlike the other mobile robot which is mechanically stable in its standing position, balancing robot need a balancing control which requires an angle value to be used as tilt feedback. The balancing control will control the robot, so it can maintain its standing position. Beside the balancing control itself, the movement of balancing robot needs its own control in order to control the movement while keeping the robot balanced. Both controllers will be combined since will both of them control the same wheel as the actuator. In this paper we proposed a cascaded PID control algorithm to combine the balancing and movement or distance controller. The movement of the robot is controlled using a distance controller that use rotary encoder sensor to measure its traveled distance. The experiment shows that the robot is able to climb up on 30 degree sloping board. By cascading the distance control to the balancing control, the robot is able to move forward, turning, and reach the desired position by calculating the body's tilt angle.",
"title": ""
},
{
"docid": "d6fbe041eb639e18c3bb9c1ed59d4194",
"text": "Based on discrete event-triggered communication scheme (DETCS), this paper is concerned with the satisfactory H ! / H 2 event-triggered fault-tolerant control problem for networked control system (NCS) with α -safety degree and actuator saturation constraint from the perspective of improving satisfaction of fault-tolerant control and saving network resource. Firstly, the closed-loop NCS model with actuator failures and actuator saturation is built based on DETCS; Secondly, based on Lyapunov-Krasovskii function and the definition of α -safety degree given in the paper, a sufficient condition is presented for NCS with the generalized H2 and H! performance, which is the contractively invariant set of fault-tolerance with α -safety degree, and the co-design method for event-triggered parameter and satisfactory faulttolerant controller is also given in this paper. Moreover, the simulation example verifies the feasibility of improving system satisfaction and the effectiveness of saving network resource for the method. Finally, the compatibility analysis of the related indexes is also discussed and analyzed.",
"title": ""
},
{
"docid": "d6b3969a6004b5daf9781c67c2287449",
"text": "Lotilaner is a new oral ectoparasiticide from the isoxazoline class developed for the treatment of flea and tick infestations in dogs. It is formulated as pure S-enantiomer in flavoured chewable tablets (Credelio™). The pharmacokinetics of lotilaner were thoroughly determined after intravenous and oral administration and under different feeding regimens in dogs. Twenty-six adult beagle dogs were enrolled in a pharmacokinetic study evaluating either intravenous or oral administration of lotilaner. Following the oral administration of 20 mg/kg, under fed or fasted conditions, or intravenous administration of 3 mg/kg, blood samples were collected up to 35 days after treatment. The effects of timing of offering food and the amount of food consumed prior or after dosing on bioavailability were assessed in a separate study in 25 adult dogs. Lotilaner blood concentrations were measured using a validated liquid chromatography/tandem mass spectrometry (LC-MS/MS) method. Pharmacokinetic parameters were calculated by non-compartmental analysis. In addition, in vivo enantiomer stability was evaluated in an analytical study. Following oral administration in fed animals, lotilaner was readily absorbed and peak blood concentrations reached within 2 hours. The terminal half-life was 30.7 days. Food enhanced the absorption, providing an oral bioavailability above 80% and reduced the inter-individual variability. Moreover, the time of feeding with respect to dosing (fed 30 min prior, fed at dosing or fed 30 min post-dosing) or the reduction of the food ration to one-third of the normal daily ration did not impact bioavailability. Following intravenous administration, lotilaner had a low clearance of 0.18 l/kg/day, large volumes of distribution Vz and Vss of 6.35 and 6.45 l/kg, respectively and a terminal half-life of 24.6 days. In addition, there was no in vivo racemization of lotilaner. The pharmacokinetic properties of lotilaner administered orally as a flavoured chewable tablet (Credelio™) were studied in detail. With a Tmax of 2 h and a terminal half-life of 30.7 days under fed conditions, lotilaner provides a rapid onset of flea and tick killing activity with consistent and sustained efficacy for at least 1 month.",
"title": ""
},
{
"docid": "7678163641a37a02474bd42a48acec16",
"text": "Thiopurine S-methyltransferase (TPMT) is involved in the metabolism of thiopurine drugs. Patients that due to genetic variation lack this enzyme or have lower levels than normal, can be adversely affected if normal doses of thiopurines are prescribed. The evidence for measuring TPMT prior to starting patients on thiopurine drug therapy has been reviewed and the various approaches to establishing a service considered. Until recently clinical guidelines on the use of the TPMT varied by medical specialty. This has now changed, with clear guidance encouraging clinicians to use the TPMT test prior to starting any patient on thiopurine therapy. The TPMT test is the first pharmacogenomic test that has crossed from research to routine use. Several analytical approaches can be taken to assess TPMT status. The use of phenotyping supported with genotyping on selected samples has emerged as the analytical model that has enabled national referral services to be developed to a high level in the UK. The National Health Service now has access to cost-effective and timely TPMT assay services, with two laboratories undertaking the majority of the work at national level and with several local services developing. There appears to be adequate capacity and an appropriate internal market to ensure that TPMT assay services are commensurate with the clinical demand.",
"title": ""
},
{
"docid": "00828ab21f8bb19a5621d6964636425e",
"text": "Deep neural networks (DNN) have achieved huge practical suc cess in recent years. However, its theoretical properties (in particular genera lization ability) are not yet very clear, since existing error bounds for neural networks cannot be directly used to explain the statistical behaviors of practically adopte d DNN models (which are multi-class in their nature and may contain convolutional l ayers). To tackle the challenge, we derive a new margin bound for DNN in this paper, in which the expected0-1 error of a DNN model is upper bounded by its empirical margin e rror plus a Rademacher Average based capacity term. This new boun d is very general and is consistent with the empirical behaviors of DNN models ob erved in our experiments. According to the new bound, minimizing the emp irical margin error can effectively improve the test performance of DNN. We ther efore propose large margin DNN algorithms, which impose margin penalty terms to the cross entropy loss of DNN, so as to reduce the margin error during the traini ng process. Experimental results show that the proposed algorithms can achiev e s gnificantly smaller empirical margin errors, as well as better test performance s than the standard DNN algorithm.",
"title": ""
},
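As a rough illustration of the margin-penalty idea described in the record above (a sketch of the general technique, not the authors' exact formulation), the following NumPy snippet adds a hinge-style margin term to a cross-entropy loss. The function names and the hyper-parameters gamma and lam are assumptions introduced only for this example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def large_margin_loss(logits, labels, gamma=1.0, lam=0.1):
    """Cross-entropy plus a hinge-style margin penalty.

    logits: (n, k) class scores; labels: (n,) integer class labels.
    gamma is the target margin and lam weights the penalty term;
    both names are illustrative, not taken from the paper."""
    n = logits.shape[0]
    probs = softmax(logits)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()

    # Margin = score of the true class minus the best competing class.
    true_scores = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf
    margins = true_scores - masked.max(axis=1)

    penalty = np.maximum(0.0, gamma - margins).mean()
    return ce + lam * penalty

# Toy usage: 4 samples, 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])
print(large_margin_loss(logits, labels))
```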
{
"docid": "11112e1738bd27f41a5b57f07b71292c",
"text": "Rotor-cage fault detection in inverter-fed induction machines is still difficult nowadays as the dynamics introduced by the control or load influence the fault-indicator signals commonly applied. In addition, detection is usually possible only when the machine is operated above a specific load level to generate a significant rotor-current magnitude. This paper proposes a new method of detecting rotor-bar defects at zero load and almost at standstill. The method uses the standard current sensors already present in modern industrial inverters and, hence, is noninvasive. It is thus well suited as a start-up test for drives. By applying an excitation with voltage pulses using the switching of the inverter and then measuring the resulting current slope, a new fault indicator is obtained. As a result, it is possible to clearly identify the fault-induced asymmetry in the machine's transient reactances. Although the transient-flux linkage cannot penetrate the rotor because of the cage, the faulty bar locally influences the zigzag flux, leading to a significant change in the transient reactances. Measurement results show the applicability and sensitivity of the proposed method.",
"title": ""
},
{
"docid": "1feb96d640980e53b2d78f49b58a1a07",
"text": "The Machine Learning (ML) field has gained its momentum in almost any domain of research and just recently has become a reliable tool in the medical domain. The empirical domain of automatic learning is used in tasks such as medical decision support, medical imaging, protein-protein interaction, extraction of medical knowledge, and for overall patient management care. ML is envisioned as a tool by which computer-based systems can be integrated in the healthcare field in order to get a better, more efficient medical care. This paper describes a ML-based methodology for building an application that is capable of identifying and disseminating healthcare information. It extracts sentences from published medical papers that mention diseases and treatments, and identifies semantic relations that exist between diseases and treatments. Our evaluation results for these tasks show that the proposed methodology obtains reliable outcomes that could be integrated in an application to be used in the medical care domain. The potential value of this paper stands in the ML settings that we propose and in the fact that we outperform previous results on the same data set.",
"title": ""
},
{
"docid": "19ab044ed5154b4051cae54387767c9b",
"text": "An approach is presented for minimizing power consumption for digital systems implemented in CMOS which involves optimization at all levels of the design. This optimization includes the technology used to implement the digital circuits, the circuit style and topology, the architecture for implementing the circuits and at the highest level the algorithms that are being implemented. The most important technology consideration is the threshold voltage and its control which allows the reduction of supply voltage without signijcant impact on logic speed. Even further supply reductions can be made by the use of an architecture-based voltage scaling strategy, which uses parallelism and pipelining, to tradeoff silicon area and power reduction. Since energy is only consumed when capacitance is being switched, power can be reduced by minimizing this capacitance through operation reduction, choice of number representation, exploitation of signal correlations, resynchronization to minimize glitching, logic design, circuit design, and physical design. The low-power techniques that are presented have been applied to the design of a chipset for a portable multimedia terminal that supports pen input, speech I/O and fullmotion video. The entire chipset that perjorms protocol conversion, synchronization, error correction, packetization, buffering, video decompression and D/A conversion operates from a 1.1 V supply and consumes less than 5 mW.",
"title": ""
},
{
"docid": "319dcab62b88bd91095768023db79984",
"text": "Purpose—The aim of this guideline is to provide a synopsis of best clinical practices in the rehabilitative care of adults recovering from stroke. Methods—Writing group members were nominated by the committee chair on the basis of their previous work in relevant topic areas and were approved by the American Heart Association (AHA) Stroke Council’s Scientific Statement Oversight Committee and the AHA’s Manuscript Oversight Committee. The panel reviewed relevant articles on adults using computerized searches of the medical literature through 2014. The evidence is organized within the context of the AHA framework and is classified according to the joint AHA/American College of Cardiology and supplementary AHA methods of classifying the level of certainty and the class and level of evidence. The document underwent extensive AHA internal and external peer review, Stroke Council Leadership review, and Scientific Statements Oversight Committee review before consideration and approval by the AHA Science Advisory and Coordinating Committee. Results—Stroke rehabilitation requires a sustained and coordinated effort from a large team, including the patient and his or her goals, family and friends, other caregivers (eg, personal care attendants), physicians, nurses, physical and occupational therapists, speech-language pathologists, recreation therapists, psychologists, nutritionists, social workers, and others. Communication and coordination among these team members are paramount in maximizing the effectiveness and efficiency of rehabilitation and underlie this entire guideline. Without communication and coordination, isolated efforts to rehabilitate the stroke survivor are unlikely to achieve their full potential. Guidelines for Adult Stroke Rehabilitation and Recovery A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association",
"title": ""
},
{
"docid": "b508eee12c615b44b8b671790cf77d77",
"text": "Many search engine users face problems while retrieving their required Information. For example, a user may find it is difficult to retrieve sufficient relevant information because he use too few keywords to search or the user is inexperienced and do not search using proper keywords and the search engine is not able to receive the user real meaning through his given keywords. Also, due to the recent improvements of search engines and the rapid growth of the web, the search engines return a huge number of web pages, and then the user may take long time to look at all of these pages to find his needed information. The problem of obtaining relevant results in web searching has been tackled by several approaches. Although very effective techniques are currently used by the most popular search engines, but no a priori knowledge on the user's desires beside the search keywords is available. In this paper, we present an approach for optimizing the search engine results using artificial intelligence techniques such as document clustering and genetic algorithm to provide the user with the most relevant pages to the search query. The proposed method uses the Meta-data that is coming from the user preferences or the search engine query log files. These data is important to find the most related information to the user while searching the web. Finally, the method",
"title": ""
},
{
"docid": "28c19bf17c76a6517b5a7834216cd44d",
"text": "The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.",
"title": ""
},
{
"docid": "5c111a5a30f011e4f47fb9e2041644f9",
"text": "Since the audio recapture can be used to assist audio splicing, it is important to identify whether a suspected audio recording is recaptured or not. However, few works on such detection have been reported. In this paper, we propose an method to detect the recaptured audio based on deep learning and we investigate two deep learning techniques, i.e., neural network with dropout method and stack auto-encoders (SAE). The waveform samples of audio frame is directly used as the input for the deep neural network. The experimental results show that error rate around 7.5% can be achieved, which indicates that our proposed method can successfully discriminate recaptured audio and original audio.",
"title": ""
},
{
"docid": "718433393201b5521a003df6503fe18b",
"text": "The issue of potential data misuse rises whenever it is collected from several sources. In a common setting, a large database is either horizontally or vertically partitioned between multiple entities who want to find global trends from the data. Such tasks can be solved with secure multi-party computation (MPC) techniques. However, practitioners tend to consider such solutions inefficient. Furthermore, there are no established tools for applying secure multi-party computation in real-world applications. In this paper, we describe Sharemind—a toolkit, which allows data mining specialist with no cryptographic expertise to develop data mining algorithms with good security guarantees. We list the building blocks needed to deploy a privacy-preserving data mining application and explain the design decisions that make Sharemind applications efficient in practice. To validate the practical feasibility of our approach, we implemented and benchmarked four algorithms for frequent itemset mining.",
"title": ""
},
{
"docid": "ad1d572a7ee58c92df5d1547fefba1e8",
"text": "The primary source for the blood supply of the head of the femur is the deep branch of the medial femoral circumflex artery (MFCA). In posterior approaches to the hip and pelvis the short external rotators are often divided. This can damage the deep branch and interfere with perfusion of the head. We describe the anatomy of the MFCA and its branches based on dissections of 24 cadaver hips after injection of neoprene-latex into the femoral or internal iliac arteries. The course of the deep branch of the MFCA was constant in its extracapsular segment. In all cases there was a trochanteric branch at the proximal border of quadratus femoris spreading on to the lateral aspect of the greater trochanter. This branch marks the level of the tendon of obturator externus, which is crossed posteriorly by the deep branch of the MFCA. As the deep branch travels superiorly, it crosses anterior to the conjoint tendon of gemellus inferior, obturator internus and gemellus superior. It then perforates the joint capsule at the level of gemellus superior. In its intracapsular segment it runs along the posterosuperior aspect of the neck of the femur dividing into two to four subsynovial retinacular vessels. We demonstrated that obturator externus protected the deep branch of the MFCA from being disrupted or stretched during dislocation of the hip in any direction after serial release of all other soft-tissue attachments of the proximal femur, including a complete circumferential capsulotomy. Precise knowledge of the extracapsular anatomy of the MFCA and its surrounding structures will help to avoid iatrogenic avascular necrosis of the head of the femur in reconstructive surgery of the hip and fixation of acetabular fractures through the posterior approach.",
"title": ""
}
] |
scidocsrr
|
27159783fa73d81f92ff1cc06419ce9d
|
Efficient Inferencing of Compressed Deep Neural Networks
|
[
{
"docid": "5116079b69aeb1858177429fabd10f80",
"text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.",
"title": ""
},
{
"docid": "0c12fd61acd9e02be85b97de0cc79801",
"text": "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb everincreasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.",
"title": ""
}
] |
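The weight-sharing idea in the HashedNets record above can be sketched in a few lines: each position of a layer's virtual weight matrix is hashed into a small bucket of shared parameters. The snippet below is an illustrative sketch under that assumption, not the paper's implementation; the hash choice (CRC32), the layer shape, and the function name are invented for the example.

```python
import numpy as np
import zlib

def hashed_layer_forward(x, hashed_weights, n_in, n_out, seed=0):
    """Forward pass of a fully connected layer whose (n_in x n_out)
    virtual weight matrix shares values from a small 1-D parameter
    vector via a cheap hash. All names here are illustrative."""
    k = hashed_weights.size
    # Map each virtual position (i, j) to a bucket in the shared vector.
    idx = np.array([[zlib.crc32(f"{seed}:{i}:{j}".encode()) % k
                     for j in range(n_out)] for i in range(n_in)])
    w_virtual = hashed_weights[idx]   # (n_in, n_out) view built from shared params
    return x @ w_virtual

# Toy usage: an 8 -> 4 layer backed by only 6 real parameters.
rng = np.random.default_rng(1)
shared = rng.normal(scale=0.1, size=6)
x = rng.normal(size=(2, 8))
print(hashed_layer_forward(x, shared, n_in=8, n_out=4).shape)  # (2, 4)
```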
[
{
"docid": "2d13bda0defb815bdc51e02262b78222",
"text": "A method has been devised for the electrophoretic transfer of proteins from polyacrylamide gels to nitrocellulose sheets. The method results in quantitative transfer of ribosomal proteins from gels containing urea. For sodium dodecyl sulfate gels, the original band pattern was obtained with no loss of resolution, but the transfer was not quantitative. The method allows detection of proteins by autoradiography and is simpler than conventional procedures. The immobilized proteins were detectable by immunological procedures. All additional binding capacity on the nitrocellulose was blocked with excess protein; then a specific antibody was bound and, finally, a second antibody directed against the first antibody. The second antibody was either radioactively labeled or conjugated to fluorescein or to peroxidase. The specific protein was then detected by either autoradiography, under UV light, or by the peroxidase reaction product, respectively. In the latter case, as little as 100 pg of protein was clearly detectable. It is anticipated that the procedure will be applicable to analysis of a wide variety of proteins with specific reactions or ligands.",
"title": ""
},
{
"docid": "f577f970f841d8dee34e524ba661e727",
"text": "The rapid growth in the amount of user-generated content (UGCs) online necessitates for social media companies to automatically extract knowledge structures (concepts) from user-generated images (UGIs) and user-generated videos (UGVs) to provide diverse multimedia-related services. For instance, recommending preference-aware multimedia content, the understanding of semantics and sentics from UGCs, and automatically computing tag relevance for UGIs are benefited from knowledge structures extracted from multiple modalities. Since contextual information captured by modern devices in conjunction with a media item greatly helps in its understanding, we leverage both multimedia content and contextual information (eg., spatial and temporal metadata) to address above-mentioned social media problems in our doctoral research. We present our approaches, results, and works in progress on these problems.",
"title": ""
},
{
"docid": "ff81d8b7bdc5abbd9ada376881722c02",
"text": "Along with the progress of miniaturization and energy saving technologies of sensors, biological information in our daily life can be monitored by installing the sensors to a lavatory bowl. Lavatory is usually shared among several people, therefore biological information need to be identified. Using camera, microphone, or scales is not appropriate considering privacy in a lavatory. In this paper, we focus on the difference in the way of pulling a toilet paper roll and propose a system that identifies individuals based on features of rotation of a toilet paper roll with a gyroscope. The evaluation results confirmed that 85.8% accuracy was achieved for a five-people group in a laboratory environment.",
"title": ""
},
{
"docid": "f57b49cef2e90b8d8029dafaf59973a3",
"text": "Logic emerged as the discipline of reasoning and its syllogistic fragment investigates one of the most fundamental aspect of human reasoning. However, empirical studies have shown that human inference differs from what is characterized by traditional logical validity. In order to better characterize the patterns of human reasoning, psychologists and philosophers have proposed a number of theories of syllogistic reasoning. We contribute to this endeavor by proposing a model based on natural logic with empirically weighted inference rules. Following the mental logic tradition, our basic assumptions are, firstly, natural language sentences are the mental representation of reasoning; secondly, inference rules are among the basic mental operations of reasoning; thirdly, subjects make guesses that depend on a few heuristics. We implemented the model and trained it with the experimental data. The model was able to make around 95% correct predictions and, as far as we can see from the data we have access to, it outperformed all other syllogistic theories. We further discuss the psychological plausibility of the model and the possibilities of extending the model to cover larger fragments of natural language.",
"title": ""
},
{
"docid": "f3375c52900c245ede8704a2c1cfbc9b",
"text": "In 2000 Hone and Graham [4] published ‘Towards a tool for the subjective assessment of speech system interfaces (SASSI)’. This position paper argues that the time is right to turn the theoretical foundations established in this earlier paper into a fully validated and score-able real world tool which can be applied to the usability measurement of current speech based systems. We call for a collaborative effort to refine the current question set and then collect and share sufficient data using the revised tool to allow establishment of its psychometric properties as a valid and reliable measure of speech system usability.",
"title": ""
},
{
"docid": "ad6bb165620dafb7dcadaca91c9de6b0",
"text": "This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.",
"title": ""
},
{
"docid": "1919e173d8bfbff038837322794f0ca1",
"text": "In this tutorial, we provided a comprehensive overview of coalitional game theory, and its usage in wireless and communication networks. For this purpose, we introduced a novel classification of coalitional games by grouping the sparse literature into three distinct classes of games: canonical coalitional games, coalition formation games, and coalitional graph games. For each class, we explained in details the fundamental properties, discussed the main solution concepts, and provided an in-depth analysis of the methodologies and approaches for using these games in both game theory and communication applications. The presented applications have been carefully selected from a broad range of areas spanning a diverse number of research problems. The tutorial also sheds light on future opportunities for using the strong analytical tool of coalitional games in a number of applications. In a nutshell, this article fills a void in existing communications literature, by providing a novel tutorial on applying coalitional game theory in communication networks through comprehensive theory and technical details as well as through practical examples drawn from both game theory and communication application.",
"title": ""
},
{
"docid": "377bfe9d8900347ef89be614f5cb49db",
"text": "The function of the comparing fingerprints algorithm was to judge whether a new partitioned data chunk was in a storage system a decade ago. At present, in the most de-duplication backup system the fingerprints of the big data chunks are huge and cannot be stored in the memory completely. The performance of the system is unavoidably retarded by data chunks accessing the storage system at the querying stage. Accordingly, a new query mechanism namely Two-stage Bloom Filter (TBF) mechanism is proposed. Firstly, as a representation of the entirety for the first grade bloom filter, each bit of the second grade bloom filter in the TBF represents the chunks having the identical fingerprints reducing the rate of false positives. Secondly, a two-dimensional list is built corresponding to the two grade bloom filter for the absolute addresses of the data chunks with the identical fingerprints. Finally, a new hash function class with the strong global random characteristic is set up according to the data fingerprints’ random characteristics. To reduce the comparing data greatly, TBF decreases the number of accessing disks, improves the speed of detecting the redundant data chunks, and reduces the rate of false positives which helps the improvement of the overall performance of system.",
"title": ""
},
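The two-stage query idea in the record above can be illustrated with a small sketch: a first-stage Bloom filter tracks every fingerprint inserted so far, and a second-stage filter marks only fingerprints that appear to recur, so that only those trigger a disk lookup. This is one plausible reading of the mechanism, not the paper's exact design; the filter sizes, hash function, and class name are assumptions.

```python
import hashlib

class TwoStageBloomFilter:
    """Minimal sketch of a two-stage Bloom filter for chunk fingerprints.
    Stage 1 answers "possibly seen before?"; stage 2 marks fingerprints
    believed to occur more than once. Sizes and hash count are illustrative."""

    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.stage1 = bytearray(m)   # all fingerprints inserted so far
        self.stage2 = bytearray(m)   # fingerprints seen at least twice

    def _positions(self, fingerprint):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{fingerprint}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, fingerprint):
        pos = list(self._positions(fingerprint))
        if all(self.stage1[p] for p in pos):      # probably a repeated fingerprint
            for p in pos:
                self.stage2[p] = 1
        for p in pos:
            self.stage1[p] = 1

    def probably_duplicate(self, fingerprint):
        # Only fingerprints flagged in stage 2 would need a disk lookup.
        return all(self.stage2[p] for p in self._positions(fingerprint))

bf = TwoStageBloomFilter()
for fp in ["c1", "c2", "c1"]:
    bf.add(fp)
print(bf.probably_duplicate("c1"), bf.probably_duplicate("c2"))  # True False
```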
{
"docid": "2a057079c544b97dded598b6f0d750ed",
"text": "Introduction Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions:",
"title": ""
},
{
"docid": "0749c071fd4bb1784ca7eca51a25d955",
"text": "Digital camera and mobile document image acquisition are new trends arising in the world of Optical Character Recognition and text detection. In some cases, such process integrates many distortions and produces poorly scanned text or text-photo images and natural images, leading to an unreliable OCR digitization. In this paper, we present a novel nonparametric and unsupervised method to compensate for undesirable document image distortions aiming to optimally improve OCR accuracy. Our approach relies on a very efficient stack of document image enhancing techniques to recover deformation of the entire document image. First, we propose a local brightness and contrast adjustment method to effectively handle lighting variations and the irregular distribution of image illumination. Second, we use an optimized greyscale conversion algorithm to transform our document image to greyscale level. Third, we sharpen the useful information in the resulting greyscale image using Un-sharp Masking method. Finally, an optimal global binarization approach is used to prepare the final document image to OCR recognition. The proposed approach can significantly improve text detection rate and optical character recognition accuracy. To demonstrate the efficiency of our approach, an exhaustive experimentation on a standard dataset is presented.",
"title": ""
},
{
"docid": "c431736bff4a9ff7ba4a7eac6985d963",
"text": "BACKGROUND\nIn clinical research, randomized controlled trials (RCTs) are the best way to study the safety and efficacy of new treatments. RCTs are used to answer patient-related questions and are required by governmental regulatory bodies as the basis for approval decisions.\n\n\nMETHODS\nTo help readers understand and evaluate RCTs, we discuss the methods and qualitative requirements of RCTs with reference to the literature and an illustrative case study. The discussion here corresponds to expositions of the subject that can be found in many textbooks but also reflects the authors' personal experience in planning, conducting and analyzing RCTs.\n\n\nRESULTS\nThe quality of an RCT depends on an appropriate study question and study design, the prevention of systematic errors, and the use of proper analytical techniques. All of these aspects must be attended to in the planning, conductance, analysis, and reporting of RCTs. RCTs must also meet ethical and legal requirements.\n\n\nCONCLUSION\nRCTs cannot yield reliable data unless they are planned, conducted, analyzed, and reported in ways that are methodologically sound and appropriate to the question being asked. The quality of any RCT must be critically evaluated before its relevance to patient care can be considered.",
"title": ""
},
{
"docid": "6fdd0c7d239417234cfc4706a82b5a0f",
"text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> , e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone <xref ref-type=\"bibr\" rid=\"ref2\">[2]</xref> and Duo Lingo <xref ref-type=\"bibr\" rid=\"ref3\">[3]</xref> . The approach is grounded in control theory and capitalizes on recent work by <xref ref-type=\"bibr\" rid=\"ref4\">[4] </xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on <xref ref-type=\"bibr\" rid=\"ref4\">[4]</xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> in several ways: (1) We develop a novel student model in which the teacher's actions can <italic>partially </italic> eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted <italic> analytically</italic> rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through <italic>deeper</italic> learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.",
"title": ""
},
{
"docid": "9ff9732a71ab0ac540fee31ad4af40a2",
"text": "The Internet of Things (IoT) is undeniably transforming the way that organizations communicate and organize everyday businesses and industrial procedures. Its adoption has proven well suited for sectors that manage a large number of assets and coordinate complex and distributed processes. This survey analyzes the great potential for applying IoT technologies (i.e., data-driven applications or embedded automation and intelligent adaptive systems) to revolutionize modern warfare and provide benefits similar to those in industry. It identifies scenarios where Defense and Public Safety (PS) could leverage better commercial IoT capabilities to deliver greater survivability to the warfighter or first responders, while reducing costs and increasing operation efficiency and effectiveness. This article reviews the main tactical requirements and the architecture, examining gaps and shortcomings in existing IoT systems across the military field and mission-critical scenarios. The review characterizes the open challenges for a broad deployment and presents a research roadmap for enabling an affordable IoT for defense and PS.",
"title": ""
},
{
"docid": "046207a87b7b01f6bc12f08a195670b9",
"text": "Text normalization is the task of transforming lexical variants to their canonical forms. We model the problem of text normalization as a character-level sequence to sequence learning problem and present a neural encoder-decoder model for solving it. To train the encoder-decoder model, many sentences pairs are generally required. However, Japanese non-standard canonical pairs are scarce in the form of parallel corpora. To address this issue, we propose a method of data augmentation to increase data size by converting existing resources into synthesized non-standard forms using handcrafted rules. We conducted an experiment to demonstrate that the synthesized corpus contributes to stably train an encoder-decoder model and improve the performance of Japanese text normalization.",
"title": ""
},
{
"docid": "07348109c7838032850c039f9a463943",
"text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these high crystalline ceramics include lithium disilicate and zirconia.",
"title": ""
},
{
"docid": "cb98fd6c850d9b3d9a2bac638b9f632d",
"text": "Artificial immune systems are a collection of algorithms inspired by the human immune system. Over the past 15 years, extensive research has been performed regarding the application of artificial immune systems to computer security. However, existing immune-inspired techniques have not performed as well as expected when applied to the detection of intruders in computer systems. In this thesis the development of the Dendritic Cell Algorithm is described. This is a novel immune-inspired algorithm based on the function of the dendritic cells of the human immune system. In nature, dendritic cells function as natural anomaly detection agents, instructing the immune system to respond if stress or damage is detected. Dendritic cells are a crucial cell in the detection and combination of ‘signals’ which provide the immune system with a sense of context. The Dendritic Cell Algorithm is based on an abstract model of dendritic cell behaviour, with the abstraction process performed in close collaboration with immunologists. This algorithm consists of components based on the key properties of dendritic cell behaviour, which involves data fusion and correlation components. In this algorithm, four categories of input signal are used. The resultant algorithm is formally described in this thesis and is validated on a standard machine learning dataset. The validation process shows that the Dendritic Cell Algorithm can be applied to static datasets and suggests that the algorithm is suitable for the analysis of time-dependent data. Further analysis and evaluation of the Dendritic Cell Algorithm is performed. This is assessed through the algorithm’s application to the detection of anomalous port scans. The results of this investigation show that the Dendritic Cell Algorithm can be applied to detection problems in real-time. This analysis also shows that detection with this algorithm produces high rates of false positives and high rates of true positives, in addition to being robust against modification to system parameters. The limitations of the Dendritic Cell Algorithm are also evaluated and presented, including loss of sensitivity and the generation of false positives under certain circumstances. It is shown that the Dendritic Cell Algorithm can perform well as an anomaly detection algorithm and can be applied to real-world, realtime data.",
"title": ""
},
{
"docid": "3fac31e0592c23c4c2f3aba942389fde",
"text": "This paper proposes a method formodelling and simulation of photovoltaic arrays. The method is used to obtain the parameters of the array model using its datasheet information. To reduce computational time, the input parameters are reduced to four and the values of shunt resistance Rp and series resistance Rs are estimated by simulated annealing optimization method. Then we draw I-V and P-V curves at different irradiance levels. Lowcomplexityanalogue MPPT circuit can be developedby usingtwo voltage approximation lines (VALs) that approximate the maximum power point (MPP) locus.In this paper, a fast and low cost analog MPPT method for low power PVsystems is proposed.The Simulation results coincide with experimental results at different PV systems to validate the powerful of the proposed method.",
"title": ""
},
{
"docid": "2b89021776b9c2be56a624ea401be99e",
"text": "Massive open online courses (MOOCs) are now being used across the world to provide millions of learners with access to education. Many learners complete these courses successfully, or to their own satisfaction, but the high numbers who do not finish remain a subject of concern for platform providers and educators. In 2013, a team from Stanford University analysed engagement patterns on three MOOCs run on the Coursera platform. They found four distinct patterns of engagement that emerged from MOOCs based on videos and assessments. However, not all platforms take this approach to learning design. Courses on the FutureLearn platform are underpinned by a social-constructivist pedagogy, which includes discussion as an important element. In this paper, we analyse engagement patterns on four FutureLearn MOOCs and find that only two clusters identified previously apply in this case. Instead, we see seven distinct patterns of engagement: Samplers, Strong Starters, Returners, Mid-way Dropouts, Nearly There, Late Completers and Keen Completers. This suggests that patterns of engagement in these massive learning environments are influenced by decisions about pedagogy. We also make some observations about approaches to clustering in this context.",
"title": ""
},
{
"docid": "53a49412d75190357df5d159b11843f0",
"text": "Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence. However, in current machine learning systems, the perception and reasoning modules are incompatible. Tasks requiring joint perception and reasoning ability are difficult to accomplish autonomously and still demand human intervention. Inspired by the way language experts decoded Mayan scripts by joining two abilities in an abductive manner, this paper proposes the abductive learning framework. The framework learns perception and reasoning simultaneously with the help of a trial-and-error abductive process. We present the Neural-Logical Machine as an implementation of this novel learning framework. We demonstrate thatusing human-like abductive learningthe machine learns from a small set of simple hand-written equations and then generalizes well to complex equations, a feat that is beyond the capability of state-of-the-art neural network models. The abductive learning framework explores a new direction for approaching human-level learning ability.",
"title": ""
},
{
"docid": "7b36abede1967f89b79975883074a34d",
"text": "In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding-based kernel achieves the best performance. Furthermore, we present episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for VIN and GVIN. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and realworld street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image).",
"title": ""
}
] |
scidocsrr
|
26bfe9c5f377606983e3f38dba0c153d
|
A Novel Grain-Oriented Lamination Rotor Core Assembly for a Synchronous Reluctance Traction Motor With a Reduced Torque Ripple Algorithm
|
[
{
"docid": "70bee569e694c92b79bd5e7dc586cbdc",
"text": "Synchronous reluctance machines (SynRM) have been used widely in industries for instance, in ABB's new VSD product package based on SynRM technology. It is due to their unique merits such as high efficiency, fast dynamic response, and low cost. However, considering the major requirements for traction applications such as high torque and power density, low torque ripple, wide speed range, proper size, and capability of meeting a specific torque envelope, this machine is still under investigation to be developed for traction applications. Since the choice of motor for traction is generally determined by manufacturers with respect to three dominant factors: cost, weight, and size, the SynRM can be considered a strong alternative due to its high efficiency and lower cost. Hence, the machine's proper size estimation is a major step of the design process before attempting the rotor geometry design. This is crucial in passenger vehicles in which compactness is a requirement and the size and weight are indeed the design limitations. This paper presents a methodology for sizing a SynRM. The electric and magnetic parameters of the proposed machine in conjunction with the core dimensions are calculated. Then, the proposed method's validity and evaluation are done using FE analysis.",
"title": ""
},
{
"docid": "906ad369cd2c4839edbaa10698a78301",
"text": "Three different motor drives for electric traction are compared, in terms of output power and efficiency at the same stack dimensions and inverter size. Induction motor (IM), surface-mounted permanent-magnet (PM) (SPM), and interior PM (IPM) synchronous motor drives are investigated, with reference to a common vehicle specification. The IM is penalized by the cage loss, but it is less expensive and inherently safe in case of inverter unwilled turnoff due to natural de-excitation. The SPM motor has a simple construction and shorter end connections, but it is penalized by eddy-current loss at high speed, has a very limited transient overload power, and has a high uncontrolled generator voltage. The IPM motor shows the better performance compromise, but it might be more complicated to be manufactured. Analytical relationships are first introduced and then validated on three example designs and finite element calculated, accounting for core saturation, harmonic losses, the effects of skewing, and operating temperature. The merits and limitations of the three solutions are quantified comprehensively and summarized by the calculation of the energy consumption over the standard New European Driving Cycle.",
"title": ""
}
] |
[
{
"docid": "fcd3eb613db484d7d2bd00a03e5192bc",
"text": "A design methodology by including the finite PSR of the error amplifier to improve the low frequency PSR of the Low dropout regulator with improved voltage subtractor circuit is proposed. The gm/ID method based on exploiting the all regions of operation of the MOS transistor is utilized for the design of LDO regulator. The PSR of the LDO regulator is better than -50dB up to 10MHz frequency for the load currents up to 20mA with 0.15V drop-out voltage. A comparison is made between different schematics of the LDO regulator and proposed methodology for the LDO regulator with improved voltage subtractor circuit. Low frequency PSR of the regulator can be significantly improved with proposed methodology.",
"title": ""
},
{
"docid": "085dfb30158e1adf75bfa95faf627492",
"text": "In this paper, we design a dual-polarization corporate-feed waveguide 32×32-slot array antenna for 120 GHz band in order to increase the operating frequency band and the gain. As a result of the simulation by HFSS, the bandwidth for VSWR less than 1.5 at the design frequency of 125 GHz is 7.6% (S11) and 8.8% (S22), respectively. The isolation is more than 50 dB over the above bandwidth. The simulated realized gain for both polarizations is 38.5 dBi at 125 GHz with the antenna efficiency of 85%.",
"title": ""
},
{
"docid": "cf30e30d7683fd2b0dec2bd6cc354620",
"text": "As online courses such as MOOCs become increasingly popular, there has been a dramatic increase for the demand for methods to facilitate this type of organisation. While resources for new courses are often freely available, they are generally not suitably organised into easily manageable units. In this paper, we investigate how state-of-the-art topic segmentation models can be utilised to automatically transform unstructured text into coherent sections, which are suitable for MOOCs content browsing. The suitability of this method with regards to course organisation is confirmed through experiments with a lecture corpus, configured explicitly according to MOOCs settings. Experimental results demonstrate the reliability and scalability of this approach over various academic disciplines. The findings also show that the topic segmentation model which used discourse cues displayed the best results overall.",
"title": ""
},
{
"docid": "ca74dda60d449933ff72d14fe5c7493c",
"text": "We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditional distribution that generally involves a small move, so it has fewer dominant modes and is unimodal in the limit of small moves. This simplifies the learning problem, making it less like density estimation and more akin to supervised function approximation, with gradients that can be obtained by backprop. The theorems provided here provide a probabilistic interpretation for denoising autoencoders and generalize them; seen in the context of this framework, auto-encoders that learn with injected noise are a special case of GSNs and can be interpreted as generative models. The theorems also provide an interesting justification for dependency networks and generalized pseudolikelihood and define an appropriate joint distribution and sampling mechanism, even when the conditionals are not consistent. GSNs can be used with missing inputs and can be used to sample subsets of variables given the rest. Experiments validating these theoretical results are conducted on both synthetic datasets and image datasets. The experiments employ a particular architecture that mimics the Deep Boltzmann Machine Gibbs sampler but that allows training to proceed with backprop through a recurrent neural network with noise injected inside and without the need for layerwise pretraining.",
"title": ""
},
{
"docid": "dca9a39a9fdf69825ab37196a8b8acea",
"text": "We contrast two seemingly distinct approaches to the task of question answering (QA) using Freebase: one based on information extraction techniques, the other on semantic parsing. Results over the same test-set were collected from two state-ofthe-art, open-source systems, then analyzed in consultation with those systems’ creators. We conclude that the differences between these technologies, both in task performance, and in how they get there, is not significant. This suggests that the semantic parsing community should target answering more compositional open-domain questions that are beyond the reach of more direct information extraction methods.",
"title": ""
},
{
"docid": "79ca455db7e7348000c6590a442f9a4c",
"text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis upon flap systems. It discusses existing electro-hydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance and life cycle costs. The paper then progresses to describe a full scale actuation demonstrator of the flap system, including the high speed electrical drive, step down gearbox and flaps. Detailed descriptions are given of the fault tolerant motor, power electronics, control architecture and position sensor systems, along with a range of test results, demonstrating the system in operation",
"title": ""
},
{
"docid": "3724a800d0c802203835ef9f68a87836",
"text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.",
"title": ""
},
{
"docid": "3744970293b3ed4c4543e6f2313fe2e4",
"text": "With the proliferation of GPS-enabled smart devices and increased availability of wireless network, spatial crowdsourcing (SC) has been recently proposed as a framework to automatically request workers (i.e., smart device carriers) to perform location-sensitive tasks (e.g., taking scenic photos, reporting events). In this paper we study a destination-aware task assignment problem that concerns the optimal strategy of assigning each task to proper worker such that the total number of completed tasks can be maximized whilst all workers can reach their destinations before deadlines after performing assigned tasks. Finding the global optimal assignment turns out to be an intractable problem since it does not imply optimal assignment for individual worker. Observing that the task assignment dependency only exists amongst subsets of workers, we utilize tree-decomposition technique to separate workers into independent clusters and develop an efficient depth-first search algorithm with progressive bounds to prune non-promising assignments. Our empirical studies demonstrate that our proposed technique is quite effective and settle the problem nicely.",
"title": ""
},
{
"docid": "feb57c831158e03530d59725ae23af00",
"text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine the success of MTL. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary task configurations, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, because significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"title": ""
},
{
"docid": "77c7f144c63df9022434313cfe2e5290",
"text": "Today the prevalence of online banking is enormous. People prefer to accomplish their financial transactions through the online banking services offered by their banks. This method of accessing is more convenient, quicker and secured. Banks are also encouraging their customers to opt for this mode of e-banking facilities since that result in cost savings for the banks and there is better customer satisfaction. An important aspect of online banking is the precise authentication of users before allowing them to access their accounts. Typically this is done by asking the customers to enter their unique login id and password combination. The success of this authentication relies on the ability of customers to maintain the secrecy of their passwords. Since the customer login to the banking portals normally occur in public environments, the passwords are prone to key logging attacks. To avoid this, virtual keyboards are provided. But virtual keyboards are vulnerable to shoulder surfing based attacks. In this paper, a secured virtual keyboard scheme that withstands such attacks is proposed. Elaborate user studies carried out on the proposed scheme have testified the security and the usability of the proposed approach.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "1e2767ace7b4d9f8ca2a5eee21684240",
"text": "Modern data analytics applications typically process massive amounts of data on clusters of tens, hundreds, or thousands of machines to support near-real-time decisions.The quantity of data and limitations of disk and memory bandwidth often make it infeasible to deliver answers at interactive speeds. However, it has been widely observed that many applications can tolerate some degree of inaccuracy. This is especially true for exploratory queries on data, where users are satisfied with \"close-enough\" answers if they can come quickly. A popular technique for speeding up queries at the cost of accuracy is to execute each query on a sample of data, rather than the whole dataset. To ensure that the returned result is not too inaccurate, past work on approximate query processing has used statistical techniques to estimate \"error bars\" on returned results. However, existing work in the sampling-based approximate query processing (S-AQP) community has not validated whether these techniques actually generate accurate error bars for real query workloads. In fact, we find that error bar estimation often fails on real world production workloads. Fortunately, it is possible to quickly and accurately diagnose the failure of error estimation for a query. In this paper, we show that it is possible to implement a query approximation pipeline that produces approximate answers and reliable error bars at interactive speeds.",
"title": ""
},
{
"docid": "61282d5ef37e5821a5a856f0bbe26cc2",
"text": "Second language teachers are great consumers of grammar. They are mainly interested in pedagogical grammar, but they are generally unaware of the work of theoretical linguists, such as Chomsky and Halliday. Whereas Chomsky himself has never suggested in any way that his work might be of benefit to L2 teaching, Halliday and his many disciples, have. It seems odd that language teachers should choose to ignore the great gurus of grammar. Even if their work is deemed too technical and theoretical for classroom application, it may still shed light on pedagogical grammar and provide a rationale for the way one goes about teaching grammar. In order to make informed decisions about what grammar to teach and how best to teach it, one should take stock of the various schools of grammar that seem to speak in very different voices. In the article, the writer outlines the kinds of grammar that come out of five of these schools, and assesses their usefulness to the L2 teacher.",
"title": ""
},
{
"docid": "fd7b6e587e8fb84083b53a44359d1cc2",
"text": "The literature contains several reports evaluating the abilities of deep neural networks in text transfer learning. To our knowledge, however, there have been few efforts to fully realize the potential of deep neural networks in cross-domain product review sentiment classification. In this paper, we propose a two-layer convolutional neural network (CNN) for cross-domain product review sentiment classification (LM-CNN-LB). Transfer learning research into product review sentiment classification based on deep neural networks has been limited by the lack of a large-scale corpus; we sought to remedy this problem using a large-scale auxiliary cross-domain dataset collected from Amazon product reviews. Our proposed framework exhibits the dramatic transferability of deep neural networks for cross-domain product review sentiment classification and achieves state-of-the-art performance. The framework also outperforms complex engineered features used with a non-deep neural network method. The experiments demonstrate that introducing large-scale data from similar domains is an effective way to resolve the lack of training data. The LM-CNN-LB trained on the multi-source related domain dataset outperformed the one trained on a single similar domain.",
"title": ""
},
{
"docid": "eb17d97b32db0682d10dfef2ab0c2902",
"text": "Previous studies have suggested that solar-induced chlorophyll fluorescence (SIF) is correlated with Gross Primary Production (GPP). However, it remains unclear to what extent this relationship is due to absorbed photosynthetically active radiation (APAR) and/or light use efficiency (LUE). Here we present the first time series of near-surface measurement of canopy-scale SIF at 760 nm in temperate deciduous forests. SIF correlated with GPP estimated with eddy covariance at diurnal and seasonal scales (r = 0.82 and 0.73, respectively), as well as with APAR diurnally and seasonally (r = 0.90 and 0.80, respectively). SIF/APAR is significantly positively correlated with LUE and is higher during cloudy days than sunny days. Weekly tower-based SIF agreed with SIF from the Global Ozone Monitoring Experiment-2 (r = 0.82). Our results provide ground-based evidence that SIF is directly related to both APAR and LUE and thus GPP, and confirm that satellite SIF can be used as a proxy for GPP.",
"title": ""
},
{
"docid": "bcab7b2f12f72c6db03446046586381e",
"text": "The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively researched, there is still little focus on detective controls related to cloud accountability and audit ability. The complexity resulting from large-scale virtualization and data distribution carried out in current clouds has revealed an urgent research agenda for cloud accountability, as has the shift in focus of customer concerns from servers to data. This paper discusses key issues and challenges in achieving a trusted cloud through the use of detective controls, and presents the Trust Cloud framework, which addresses accountability in cloud computing via technical and policy-based approaches.",
"title": ""
},
{
"docid": "dbf8e0125944b526f7b14c98fc46afa2",
"text": "People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. Motivated by the success of R-CNN [1] on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonlyused strategy. The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods. ∗Corresponding author Email address: gaocq@cqupt.edu.cn (Chenqiang Gao∗, Pei Li, Yajun Zhang, Jiang Liu, Lan Wang) Preprint submitted to Neurocomputing May 28, 2016",
"title": ""
},
{
"docid": "066092579449288a6f83c014c36238ee",
"text": "Reflective practice is one of the most popular theories of professional knowledge in the last 20 years and has been widely adopted by nursing, health, and social care professions. The term was coined by Donald Schön in his influential books The Reflective Practitioner, and Educating the Reflective Practitioner, and has garnered the unprecedented attention of theorists and practitioners of professional education and practice. Reflective practice has been integrated into professional preparatory programmes, continuing education programmes, and by the regulatory bodies of a wide range of health and social care professions. Yet, despite its popularity and widespread adoption, a problem frequently raised in the literature concerns the lack of conceptual clarity surrounding the term reflective practice. This paper seeks to respond to this problem by offering an analysis of the epistemology of reflective practice as revealed through a critical examination of philosophical influences within the theory. The aim is to discern philosophical underpinnings of reflective practice in order to advance increasingly coherent interpretations, and to consider the implications for conceptions of professional knowledge in professional life. The paper briefly examines major philosophical underpinnings in reflective practice to explicate central themes that inform the epistemological assumptions of the theory. The study draws on the work of Donald Schön, and on texts from four philosophers: John Dewey, Nelson Goodman, Michael Polanyi, and Gilbert Ryle. Five central epistemological themes in reflective practice are illuminated: (1) a broad critique of technical rationality; (2) professional practice knowledge as artistry; (3) constructivist assumptions in the theory; (4) the significance of tacit knowledge for professional practice knowledge; and (5) overcoming mind body dualism to recognize the knowledge revealed in intelligent action. The paper reveals that the theory of reflective practice is concerned with deep epistemological questions of significance to conceptions of knowledge in health and social care professions.",
"title": ""
},
{
"docid": "ea0cf1ed687d6a3e358abc2b33404da2",
"text": "Emerging mega-trends (e.g., mobile, social, cloud, and big data) in information and communication technologies (ICT) are commanding new challenges to future Internet, for which ubiquitous accessibility, high bandwidth, and dynamic management are crucial. However, traditional approaches based on manual configuration of proprietary devices are cumbersome and error-prone, and they cannot fully utilize the capability of physical network infrastructure. Recently, software-defined networking (SDN) has been touted as one of the most promising solutions for future Internet. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. As a result, SDN is positioned to provide more efficient configuration, better performance, and higher flexibility to accommodate innovative network designs. This paper surveys latest developments in this active research area of SDN. We first present a generally accepted definition for SDN with the aforementioned two characteristic features and potential benefits of SDN. We then dwell on its three-layer architecture, including an infrastructure layer, a control layer, and an application layer, and substantiate each layer with existing research efforts and its related research areas. We follow that with an overview of the de facto SDN implementation (i.e., OpenFlow). Finally, we conclude this survey paper with some suggested open research challenges.",
"title": ""
},
{
"docid": "dd4b6be4c6eb473f27643f21edf328e4",
"text": "Due to the large size that fingerprint databases generally have, the reduction of the search space is indispensable. In the resolution of this problem, indexing algorithms have a fundamental role. In the literature, there are several proposals that make use of different features to characterize fingerprints. In addition, a wide variety of recovery methods are reported. This paper concisely describes the indexing algorithms that have reported better results so far and makes a comparison between these, based on experiments in well known databases. Finally, a classification of the indexing algorithms is proposed, based on some general characteristics.",
"title": ""
}
] |
scidocsrr
|
ae5adc99aab961a670843dfb839befb6
|
Collaborative creativity: a complex systems model with distributed affect
|
[
{
"docid": "4f5272a35c9991227a6d098209de8d6c",
"text": "This is an investigation of \" Online Creativity. \" I will present a new account of the cognitive and social mechanisms underlying complex thinking of creative scientists as they work on significant problems in contemporary science. I will lay out an innovative methodology that I have developed for investigating creative and complex thinking in a real-world context. Using this method, I have discovered that there are a number of strategies that are used in contemporary science that increase the likelihood of scientists making discoveries. The findings reported in this chapter provide new insights into complex scientific thinking and will dispel many of the myths surrounding the generation of new concepts and scientific discoveries. InVivo cognition: A new way of investigating cognition There is a large background in cognitive research on thinking, reasoning and problem solving processes that form the foundation for creative cognition (see Dunbar, in press, Holyoak 1996 for recent reviews). However, to a large extent, research on reasoning has demonstrated that subjects in psychology experiments make vast numbers of thinking and reasoning errors even in the most simple problems. How is creative thought even possible if people make so many reasoning errors? One problem with research on reasoning is that the concepts and stimuli that the subjects are asked to use are often arbitrary and involve no background knowledge (cf. Dunbar, 1995; Klahr & Dunbar, 1988). I have proposed that one way of determining what reasoning errors are specific and which are general is to investigate cognition in the cognitive laboratory and the real world (Dunbar, 1995). Psychologists should conduct both InVitro and InVivo research to understand thinking. InVitro research is the standard psychological experiment where subjects are brought into the laboratory and controlled experiments are conducted. As can be seen from the research reported in this volume, this approach yields many insights into the psychological mechanisms underlying complex thinking. The use of an InVivo methodology in which online thinking and reasoning are investigated in a real-world context yields fundamental insights into the basic cognitive mechanisms underlying complex cognition and creativity. The results of InVivo cognitive research can then be used as a basis for further InVitro work in which controlled experiments are conducted. In this chapter, I will outline some of the results of my ongoing InVivo research on creative scientific thinking and relate this research back to the more common InVitro research and show that the …",
"title": ""
},
{
"docid": "8d3c1e649e40bf72f847a9f8ac6edf38",
"text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.",
"title": ""
}
] |
[
{
"docid": "f82ce890d66c746a169a38fdad702749",
"text": "The following review paper presents an overview of the current crop yield forecasting methods and early warning systems for the global strategy to improve agricultural and rural statistics across the globe. Different sections describing simulation models, remote sensing, yield gap analysis, and methods to yield forecasting compose the manuscript. 1. Rationale Sustainable land management for crop production is a hierarchy of systems operating in— and interacting with—economic, ecological, social, and political components of the Earth. This hierarchy ranges from a field managed by a single farmer to regional, national, and global scales where policies and decisions influence crop production, resource use, economics, and ecosystems at other levels. Because sustainability concepts must integrate these diverse issues, agricultural researchers who wish to develop sustainable productive systems and policy makers who attempt to influence agricultural production are confronted with many challenges. A multiplicity of problems can prevent production systems from being sustainable; on the other hand, with sufficient attention to indicators of sustainability, a number of practices and policies could be implemented to accelerate progress. Indicators to quantify changes in crop production systems over time at different hierarchical levels are needed for evaluating the sustainability of different land management strategies. To develop and test sustainability concepts and yield forecast methods globally, it requires the implementation of long-term crop and soil management experiments that include measurements of crop yields, soil properties, biogeochemical fluxes, and relevant socioeconomic indicators. Long-term field experiments cannot be conducted with sufficient detail in space and time to find the best land management practices suitable for sustainable crop production. Crop and soil simulation models, when suitably tested in reasonably diverse space and time, provide a critical tool for finding combinations of management strategies to reach multiple goals required for sustainable crop production. The models can help provide land managers and policy makers with a tool to extrapolate experimental results from one location to others where there is a lack of response information. Agricultural production is significantly affected by environmental factors. Weather influences crop growth and development, causing large intra-seasonal yield variability. In addition, spatial variability of soil properties, interacting with the weather, cause spatial yield variability. Crop agronomic management (e.g. planting, fertilizer application, irrigation, tillage, and so on) can be used to offset the loss in yield due to effects of weather. As a result, yield forecasting represents an important tool for optimizing crop yield and to evaluate the crop-area insurance …",
"title": ""
},
{
"docid": "d79d6dd8267c66ad98f33bd54ff68693",
"text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.",
"title": ""
},
{
"docid": "3f6fcee0073e7aaf587602d6510ed913",
"text": "BACKGROUND\nTreatment of early onset scoliosis (EOS) is challenging. In many cases, bracing will not be effective and growing rod surgery may be inappropriate. Serial, Risser casts may be an effective intermediate method of treatment.\n\n\nMETHODS\nWe studied 20 consecutive patients with EOS who received serial Risser casts under general anesthesia between 1999 and 2011. Analyses included diagnosis, sex, age at initial cast application, major curve severity, initial curve correction, curve magnitude at the time of treatment change or latest follow-up for those still in casts, number of casts per patient, the type of subsequent treatment, and any complications.\n\n\nRESULTS\nThere were 8 patients with idiopathic scoliosis, 6 patients with neuromuscular scoliosis, 5 patients with syndromic scoliosis, and 1 patient with skeletal dysplasia. Fifteen patients were female and 5 were male. The mean age at first cast was 3.8±2.3 years (range, 1 to 8 y), and the mean major curve magnitude was 74±18 degrees (range, 40 to 118 degrees). After initial cast application, the major curve measured 46±14 degrees (range, 25 to 79 degrees). At treatment change or latest follow-up for those still in casts, the major curve measured 53±24 degrees (range, 13 to 112 degrees). The mean time in casts was 16.9±9.1 months (range, 4 to 35 mo). The mean number of casts per patient was 4.7±2.2 casts (range, 1 to 9 casts). At the time of this study, 7 patients had undergone growing rod surgery, 6 patients were still undergoing casting, 5 returned to bracing, and 2 have been lost to follow-up. Four patients had minor complications: 2 patients each with superficial skin irritation and cast intolerance.\n\n\nCONCLUSIONS\nSerial Risser casting is a safe and effective intermediate treatment for EOS. It can stabilize relatively large curves in young children and allows the child to reach a more suitable age for other forms of treatment, such as growing rods.\n\n\nLEVEL OF EVIDENCE\nLevel IV; case series.",
"title": ""
},
{
"docid": "4fa68f011f7cb1b4874dd4b10070be17",
"text": "This paper demonstrates the development of ontology for satellite databases. First, I create a computational ontology for the Union of Concerned Scientists (UCS) Satellite Database (UCSSD for short), called the UCS Satellite Ontology (or UCSSO). Second, in developing UCSSO I show that The Space Situational Awareness Ontology (SSAO)-—an existing space domain reference ontology—-and related ontology work by the author (Rovetto 2015, 2016) can be used either (i) with a database-specific local ontology such as UCSSO, or (ii) in its stead. In case (i), local ontologies such as UCSSO can reuse SSAO terms, perform term mappings, or extend it. In case (ii), the author_s orbital space ontology work, such as the SSAO, is usable by the UCSSD and organizations with other space object catalogs, as a reference ontology suite providing a common semantically-rich domain model. The SSAO, UCSSO, and the broader Orbital Space Environment Domain Ontology project is online at https://purl.org/space-ontology and GitHub. This ontology effort aims, in part, to provide accurate formal representations of the domain for various applications. Ontology engineering has the potential to facilitate the sharing and integration of satellite data from federated databases and sensors for safer spaceflight.",
"title": ""
},
{
"docid": "28cbdb82603c720efba6880034344b94",
"text": "An experiment is reported which tests Fazey & Hardy's (1988) catastrophe model of anxiety and performance. Eight experienced basketball players were required to perform a set shooting task, under conditions of high and low cognitive anxiety. On each of these occasions, physiological arousal was manipulated by means of physical work in such a way that subjects were tested with physiological arousal increasing and decreasing. Curve-fitting procedures followed by non-parametric tests of significance confirmed (p less than .002) Fazey & Hardy's hysteresis hypothesis: namely, that the polynomial curves for the increasing vs. decreasing arousal conditions would be horizontally displaced relative to each other in the high cognitive anxiety condition, but superimposed on top of one another in the low cognitive anxiety condition. Other non-parametric procedures showed that subjects' maximum performances were higher, their minimum performances lower, and their critical decrements in performance greater in the high cognitive anxiety condition than in the low cognitive anxiety condition. These results were taken as strong support for Fazey & Hardy's catastrophe model of anxiety and performance. The implications of the model for current theorizing on the anxiety-performance relationship are also discussed.",
"title": ""
},
{
"docid": "37d77131c6100aceb4a4d49a5416546f",
"text": "Automated medical image analysis has a significant value in diagnosis and treatment of lesions. Brain tumors segmentation has a special importance and difficulty due to the difference in appearances and shapes of the different tumor regions in magnetic resonance images. Additionally the data sets are heterogeneous and usually limited in size in comparison with the computer vision problems. The recently proposed adversarial training has shown promising results in generative image modeling. In this paper we propose a novel end-to-end trainable architecture for brain tumor semantic segmentation through conditional adversarial training. We exploit conditional Generative Adversarial Network (cGAN) and train a semantic segmentation Convolution Neural Network (CNN) along with an adversarial network that discriminates segmentation maps coming from the ground truth or from the segmentation network for BraTS 2017 segmentation task[15,4,2,3]. We also propose an end-to-end trainable CNN for survival day prediction based on deep learning techniques for BraTS 2017 prediction task [15,4,2,3]. The experimental results demonstrate the superior ability of the proposed approach for both tasks. The proposed model achieves on validation data a DICE score, Sensitivity and Specificity respectively 0.68, 0.99 and 0.98 for the whole tumor, regarding online judgment system.",
"title": ""
},
{
"docid": "f7535a097b65dccf1ee8e615244d98c5",
"text": "Wireless power transfer via magnetic resonant coupling is experimentally demonstrated in a system with a large source coil and either one or two small receivers. Resonance between source and load coils is achieved with lumped capacitors terminating the coils. A circuit model is developed to describe the system with a single receiver, and extended to describe the system with two receivers. With parameter values chosen to obtain good fits, the circuit models yield transfer frequency responses that are in good agreement with experimental measurements over a range of frequencies that span the resonance. Resonant frequency splitting is observed experimentally and described theoretically for the multiple receiver system. In the single receiver system at resonance, more than 50% of the power that is supplied by the actual source is delivered to the load. In a multiple receiver system, a means for tracking frequency shifts and continuously retuning the lumped capacitances that terminate each receiver coil so as to maximize efficiency is a key issue for future work.",
"title": ""
},
{
"docid": "1090297224c76a5a2c4ade47cb932dba",
"text": "Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.",
"title": ""
},
{
"docid": "403becc6c79d81204493c3cacdd3ee4d",
"text": "Studies of protein nutrition and biochemistry require reliable methods for analysis of amino acid (AA) composition in polypeptides of animal tissues and foods. Proteins are hydrolyzed by 6M HCl (110°C for 24h), 4.2M NaOH (105°C for 20 h), or proteases. Analytical techniques that require high-performance liquid chromatography (HPLC) include pre-column derivatization with 4-chloro-7-nitrobenzofurazan, 9-fluorenyl methylchloroformate, phenylisothiocyanate, naphthalene-2,3-dicarboxaldehyde, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate, and o-phthaldialdehyde (OPA). OPA reacts with primary AA (except cysteine or cystine) in the presence of 2-mercaptoethanol or 3-mercaptopropionic acid to form a highly fluorescent adduct. OPA also reacts with 4-amino-1-butanol and 4-aminobutane-1,3-diol produced from oxidation of proline and 4-hydroxyproline, respectively, in the presence of chloramine-T plus sodium borohydride at 60°C, or with S-carboxymethyl-cysteine formed from cysteine and iodoacetic acid at 25°C. Fluorescence of OPA derivatives is monitored at excitation and emission wavelengths of 340 and 455 nm, respectively. Detection limits are 50 fmol for AA. This technique offers the following advantages: simple procedures for preparation of samples, reagents, and mobile-phase solutions; rapid pre-column formation of OPA-AA derivatives and their efficient separation at room temperature (e.g., 20-25°C); high sensitivity of detection; easy automation on the HPLC apparatus; few interfering side reactions; a stable chromatography baseline for accurate integration of peak areas; and rapid regeneration of guard and analytical columns. Thus, the OPA method provides a useful tool to determine AA composition in proteins of animal tissues (e.g., skeletal muscle, liver, intestine, placenta, brain, and body homogenates) and foods (e.g., milk, corn grain, meat, and soybean meal).",
"title": ""
},
{
"docid": "3c3c30050b32b46c28abef3ecff06376",
"text": "The analysis of social, communication and information networks for identifying patterns, evolutionary characteristics and anomalies is a key problem for the military, for instance in the Intelligence community. Current techniques do not have the ability to discern unusual features or patterns that are not a priori known. We investigate the use of deep learning for network analysis. Over the last few years, deep learning has had unprecedented success in areas such as image classification, speech recognition, etc. However, research on the use of deep learning to network or graph analysis is limited. We present three preliminary techniques that we have developed as part of the ARL Network Science CTA program: (a) unsupervised classification using a very highly trained image recognizer, namely Caffe; (b) supervised classification using a variant of convolutional neural networks on node features such as degree and assortativity; and (c) a framework called node2vec for learning representations of nodes in a network using a mapping to natural language processing.",
"title": ""
},
{
"docid": "c9b7ddb6eb1431fcc508d29a1f25104b",
"text": "The problem of finding the missing values of a matrix given a few of its entries, called matrix completion, has gathered a lot of attention in the recent years. Although the problem under the standard low rank assumption is NP-hard, Candès and Recht showed that it can be exactly relaxed if the number of observed entries is sufficiently large. In this work, we introduce a novel matrix completion model that makes use of proximity information about rows and columns by assuming they form communities. This assumption makes sense in several real-world problems like in recommender systems, where there are communities of people sharing preferences, while products form clusters that receive similar ratings. Our main goal is thus to find a low-rank solution that is structured by the proximities of rows and columns encoded by graphs. We borrow ideas from manifold learning to constrain our solution to be smooth on these graphs, in order to implicitly force row and column proximities. Our matrix recovery model is formulated as a convex non-smooth optimization problem, for which a well-posed iterative scheme is provided. We study and evaluate the proposed matrix completion on synthetic and real data, showing that the proposed structured low-rank recovery model outperforms the standard matrix completion model in many situations.",
"title": ""
},
{
"docid": "f274322ad7eed4829945bc3d483ceecb",
"text": "In this paper, an observer problem from a computer vision application is studied. Rigid body pose estimation using inertial sensors and a monocular camera is considered and it is shown how rotation estimation can be decoupled from position estimation. Orientation estimation is formulated as an observer problem with implicit output where the states evolve on (3). A careful observability study reveals interesting group theoretic structures tied to the underlying system structure. A locally convergent observer where the states evolve on (3) is proposed and numerical estimates of the domain of attraction is given. Further, it is shown that, given convergent orientation estimates, position estimation can be formulated as a linear implicit output problem. From an applications perspective, it is outlined how delayed low bandwidth visual observations and high bandwidth rate gyro measurements can provide high bandwidth estimates. This is consistent with real-time constraints due to the complementary characteristics of the sensors which are fused in a multirate way.",
"title": ""
},
{
"docid": "aeb039a1e5ae76bf8e928e6b8cbfdf7f",
"text": "ZHENG, Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and thus, used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel-color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffering Cold Zheng or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate an excellent performance of our proposed system.",
"title": ""
},
{
"docid": "817d0da77bcdd0c695d2c064f5ed9f69",
"text": "Intuition-based learning (IBL) has been used in various problem-solving areas such as risk analysis, medical diagnosis and criminal investigation. However, conventional IBL has the limitation that it has no criterion for choosing the trusted intuition based on the knowledge and experience. The purpose of this paper is to develop a learning model for human-computer cooperative from user’s perspective. We have established the theoretical foundation and conceptualization of the constructs for learning system with trusted intuition. And suggest a new machine learning technique called Trusted Intuition Network (TIN). We have developed a general instrument capable of reliably and accurately measuring trusted intuition in the context of intuitive learning systems. We also compare the results with the learning methods, artificial intuition networks and conventional IBL. The results of this paper show that the proposed technique outperforms those of many other methods, it overcomes the limitation of conventional IBL, and it provides improved uncertainty learning theory.",
"title": ""
},
{
"docid": "bacd81a1074a877e0c943a6755290d34",
"text": "This thesis addresses the problem of scheduling multiple, concurrent, adaptively parallel jobs on a multiprogrammed shared-memory multiprocessor. Adaptively parallel jobs are jobs for which the number of processors that can be used without waste varies during execution. We focus on the specific case of parallel jobs that are scheduled using a randomized work-stealing algorithm, as is used in the Cilk multithreaded language. We begin by developing a theoretical model for two-level scheduling systems, or those in which the operating system allocates processors to jobs, and the jobs schedule their threads on the processors. To analyze the performance of a job scheduling algorithm, we model the operating system as an adversary. We show that a greedy scheduler achieves an execution time that is within a factor of 2 of optimal under these conditions. Guided by our model, we present a randomized work-stealing algorithm for adaptively parallel jobs, algorithm WSAP, which takes a unique approach to estimating the processor desire of a job. We show that attempts to directly measure a job’s instantaneous parallelism are inherently misleading. We also describe a dynamic processor-allocation algorithm, algorithm DP, that allocates processors to jobs in a fair and efficient way. Using these two algorithms, we present the design and implementation of Cilk-AP, a two-level scheduling system for adaptively parallel workstealing jobs. Cilk-AP is implemented by extending the runtime system of Cilk. We tested the Cilk-AP system on a shared-memory symmetric multiprocessor (SMP) with 16 processors. Our experiments show that, relative to the original Cilk system, Cilk-AP incurs negligible overhead and provides up to 37% improvement in throughput and 30% improvement in response time in typical multiprogramming scenarios. This thesis represents joint work with Charles Leiserson and Kunal Agrawal of the Supercomputing Technologies Group at MIT’s Computer Science and Artificial Intelligence Laboratory. Thesis Supervisor: Charles E. Leiserson Title: Professor",
"title": ""
},
{
"docid": "baa5eff969c4c81c863ec4c4c6ce7734",
"text": "The research describes a rapid method for the determination of fatty acid (FA) contents in a micro-encapsulated fish-oil (μEFO) supplement by using attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopic technique and partial least square regression (PLSR) analysis. Using the ATR-FTIR technique, the μEFO powder samples can be directly analysed without any pre-treatment required, and our developed PLSR strategic approach based on the acquired spectral data led to production of a good linear calibration with R(2)=0.99. In addition, the subsequent predictions acquired from an independent validation set for the target FA compositions (i.e., total oil, total omega-3 fatty acids, EPA and DHA) were highly accurate when compared to the actual values obtained from standard GC-based technique, with plots between predicted versus actual values resulting in excellent linear fitting (R(2)≥0.96) in all cases. The study therefore demonstrated not only the substantial advantage of the ATR-FTIR technique in terms of rapidness and cost effectiveness, but also its potential application as a rapid, potentially automated, online monitoring technique for the routine analysis of FA composition in industrial processes when used together with the multivariate data analysis modelling.",
"title": ""
},
{
"docid": "4c7624e4d1674a753fb54d2a826c3666",
"text": "We tackle the question: how much supervision is needed to achieve state-of-the-art performance in part-of-speech (POS) tagging, if we leverage lexical representations given by the model of Brown et al. (1992)? It has become a standard practice to use automatically induced “Brown clusters” in place of POS tags. We claim that the underlying sequence model for these clusters is particularly well-suited for capturing POS tags. We empirically demonstrate this claim by drastically reducing supervision in POS tagging with these representations. Using either the bit-string form given by the algorithm of Brown et al. (1992) or the (less well-known) embedding form given by the canonical correlation analysis algorithm of Stratos et al. (2014), we can obtain 93% tagging accuracy with just 400 labeled words and achieve state-of-the-art accuracy (> 97%) with less than 1 percent of the original training data.",
"title": ""
},
{
"docid": "54af3c39dba9aafd5b638d284fd04345",
"text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).",
"title": ""
},
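As a small illustration of the simplest of the feature-extraction pipelines compared in the entry above, the sketch below projects flattened gray-scale region images onto their leading principal components and identifies people with a nearest-neighbour classifier; the data shapes, the number of components and the classifier are assumptions made for the sketch, not the paper's exact protocol (which fuses nine hand regions and finds the RD-LDA features to work best):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# assumed toy data: flattened gray-scale region images, several per person, one label per person
rng = np.random.default_rng(0)
n_people, imgs_per_person, img_dim = 50, 10, 32 * 32
centers = rng.random((n_people, img_dim))
X = np.repeat(centers, imgs_per_person, axis=0) \
    + 0.05 * rng.standard_normal((n_people * imgs_per_person, img_dim))
y = np.repeat(np.arange(n_people), imgs_per_person)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# project onto the leading principal components, then identify by nearest neighbour
model = make_pipeline(PCA(n_components=40, whiten=True), KNeighborsClassifier(n_neighbors=1))
model.fit(X_tr, y_tr)
print("identification accuracy:", model.score(X_te, y_te))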
{
"docid": "4bc910cb711aab699d9ec4e81cd0ce17",
"text": "This study examined the links between desensitization to violent media stimuli and habitual media violence exposure as a predictor and aggressive cognitions and behavior as outcome variables. Two weeks after completing measures of habitual media violence exposure, trait aggression, trait arousability, and normative beliefs about aggression, undergraduates (N = 303) saw a violent film clip and a sad or a funny comparison clip. Skin conductance level (SCL) was measured continuously, and ratings of anxious and pleasant arousal were obtained after each clip. Following the clips, participants completed a lexical decision task to measure accessibility of aggressive cognitions and a competitive reaction time task to measure aggressive behavior. Habitual media violence exposure correlated negatively with SCL during violent clips and positively with pleasant arousal, response times for aggressive words, and trait aggression, but it was unrelated to anxious arousal and aggressive responding during the reaction time task. In path analyses controlling for trait aggression, normative beliefs, and trait arousability, habitual media violence exposure predicted faster accessibility of aggressive cognitions, partly mediated by higher pleasant arousal. Unprovoked aggression during the reaction time task was predicted by lower anxious arousal. Neither habitual media violence usage nor anxious or pleasant arousal predicted provoked aggression during the laboratory task, and SCL was unrelated to aggressive cognitions and behavior. No relations were found between habitual media violence viewing and arousal in response to the sad and funny film clips, and arousal in response to the sad and funny clips did not predict aggressive cognitions or aggressive behavior on the laboratory task. This suggests that the observed desensitization effects are specific to violent content.",
"title": ""
},
{
"docid": "697ed30a5d663c1dda8be0183fa4a314",
"text": "Due to the Web expansion, the prediction of online news popularity is becoming a trendy research topic. In this paper, we propose a novel and proactive Intelligent Decision Support System (IDSS) that analyzes articles prior to their publication. Using a broad set of extracted features (e.g., keywords, digital media content, earlier popularity of news referenced in the article) the IDSS first predicts if an article will become popular. Then, it optimizes a subset of the articles features that can more easily be changed by authors, searching for an enhancement of the predicted popularity probability. Using a large and recently collected dataset, with 39,000 articles from the Mashable website, we performed a robust rolling windows evaluation of five state of the art models. The best result was provided by a Random Forest with a discrimination power of 73%. Moreover, several stochastic hill climbing local searches were explored. When optimizing 1000 articles, the best optimization method obtained a mean gain improvement of 15 percentage points in terms of the estimated popularity probability. These results attest the proposed IDSS as a valuable tool for online news authors.",
"title": ""
}
] |
scidocsrr
|
812c20e8c1622b163e27de3adfd6640e
|
Supplier Selection Problems in Fashion Business Operations with Sustainability Considerations
|
[
{
"docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff",
"text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "470810494ae81cc2361380c42116c8d7",
"text": "Sustainability is significantly important for fashion business due to consumers’ increasing awareness of environment. When a fashion company aims to promote sustainability, the main linkage is to develop a sustainable supply chain. This paper contributes to current knowledge of sustainable supply chain in the textile and clothing industry. We first depict the structure of sustainable fashion supply chain including eco-material preparation, sustainable manufacturing, green distribution, green retailing, and ethical consumers based on the extant literature. We study the case of the Swedish fast fashion company, H&M, which has constructed its sustainable supply chain in developing eco-materials, providing safety training, monitoring sustainable manufacturing, reducing carbon emission in distribution, and promoting eco-fashion. Moreover, based on the secondary data and analysis, we learn the lessons of H&M’s sustainable fashion supply chain from the country perspective: (1) the H&M’s sourcing managers may be more likely to select suppliers in the countries with lower degrees of human wellbeing; (2) the H&M’s supply chain manager may set a higher level of inventory in a country with a higher human wellbeing; and (3) the H&M CEO may consider the degrees of human wellbeing and economic wellbeing, instead of environmental wellbeing when launching the online shopping channel in a specific country.",
"title": ""
}
] |
[
{
"docid": "ab6d4dbaf92c142dfce0c8133e7ae669",
"text": "This paper presents a high-performance substrate-integrated-waveguide RF microelectromechanical systems (MEMS) tunable filter for 1.2-1.6-GHz frequency range. The proposed filter is developed using packaged RF MEMS switches and utilizes a two-layer structure that effectively isolates the cavity filter from the RF MEMS switch circuitry. The two-pole filter implemented on RT/Duroid 6010LM exhibits an insertion loss of 2.2-4.1 dB and a return loss better than 15 dB for all tuning states. The relative bandwidth of the filter is 3.7 ± 0.5% over the tuning range. The measured Qu of the filter is 93-132 over the tuning range, which is the best reported Q in filters using off-the-shelf RF MEMS switches on conventional printed circuit board substrates. In addition, an upper stopband rejection better than 28 dB is obtained up to 4.0 GHz by employing low-pass filters at the bandpass filter terminals at the cost of 0.7-1.0-dB increase in the insertion loss.",
"title": ""
},
{
"docid": "5c8eeecbd286273e319c860626b2ecf2",
"text": "Online user-generated content in various social media websites, such as consumer experiences, user feedback, and product reviews, has increasingly become the primary information source for both consumers and businesses. In this study, we aim to look beyond the quantitative summary and unidimensional interpretation of online user reviews to provide a more comprehensive view of online user-generated content. Moreover, we would like to extend the current literature to the more customer-driven service industries, particularly the hotel industry. We obtain a unique and extensive dataset of online user reviews for hotels across various review sites and over long time periods. We use the sentiment analysis technique to decompose user reviews into different dimensions to measure hotel service quality and performance based on the SERVPERF model. Those dimensions are then incorporated into econometrics models to examine their effect in shaping users’ overall evaluation and content-generating behavior. The results suggest that different dimensions of user reviews have significantly different effects in forming user evaluation and driving content generation. This paper demonstrates the importance of using textual data to measure consumers’ relative preferences for service quality and evaluate service performance.",
"title": ""
},
{
"docid": "04a63e01c21c45f06302935b700d5b86",
"text": "This paper presents the method and results of calculating the output voltage of the boost-converter with Cockroft-Walton voltage multiplier according to the values of the capacitors and the load. The combination of boost-converter with voltage multiplier provides a very high voltage gain, which can be adjusted by changing the duty cycle of the boost converter. It is shown that the output voltage of the converter is a complex function of the switching frequency, duty cycle, number and value of the capacitors and load resistance. The proposed method for calculating the external characteristic is based on the analysis of the energy losses in the capacitors of the multiplier. The formulas and graphs for estimating the output voltage as a function of the converter and load parameters are obtained. Experimental results confirmed the theoretical expectations.",
"title": ""
},
{
"docid": "305a6b7cfcc560e1356fa7a44fee8de2",
"text": "Power MOSFET designs have been moving to higher performance particularly in the medium voltage area. (60V to 300V) New designs require lower specific on-resistance (RSP) thus forcing designers to push the envelope of increasing the electric field stress on the shielding oxide, reducing the cell pitch, and increasing the epitaxial (epi) drift doping to reduce on resistance. In doing so, time dependant avalanche instabilities have become a concern for oxide charge balanced power MOSFETs. Avalanche instabilities can initiate in the active cell and/or the termination structures. These instabilities cause the avalanche breakdown to increase and/or decrease with increasing time in avalanche. They become a reliability risk when the drain to source breakdown voltage (BVdss) degrades below the operating voltage of the application circuit. This paper will explain a mechanism for these avalanche instabilities and propose an optimum design for the charge balance region. TCAD simulation was employed to give insight to the mechanism. Finally, measured data will be presented to substantiate the theory.",
"title": ""
},
{
"docid": "f69b9816e8f8716d12eaa43e3d1222f4",
"text": "BACKGROUND\nIn 1986, the European Organization for Research and Treatment of Cancer (EORTC) initiated a research program to develop an integrated, modular approach for evaluating the quality of life of patients participating in international clinical trials.\n\n\nPURPOSE\nWe report here the results of an international field study of the practicality, reliability, and validity of the EORTC QLQ-C30, the current core questionnaire. The QLQ-C30 incorporates nine multi-item scales: five functional scales (physical, role, cognitive, emotional, and social); three symptom scales (fatigue, pain, and nausea and vomiting); and a global health and quality-of-life scale. Several single-item symptom measures are also included.\n\n\nMETHODS\nThe questionnaire was administered before treatment and once during treatment to 305 patients with nonresectable lung cancer from centers in 13 countries. Clinical variables assessed included disease stage, weight loss, performance status, and treatment toxicity.\n\n\nRESULTS\nThe average time required to complete the questionnaire was approximately 11 minutes, and most patients required no assistance. The data supported the hypothesized scale structure of the questionnaire with the exception of role functioning (work and household activities), which was also the only multi-item scale that failed to meet the minimal standards for reliability (Cronbach's alpha coefficient > or = .70) either before or during treatment. Validity was shown by three findings. First, while all interscale correlations were statistically significant, the correlation was moderate, indicating that the scales were assessing distinct components of the quality-of-life construct. Second, most of the functional and symptom measures discriminated clearly between patients differing in clinical status as defined by the Eastern Cooperative Oncology Group performance status scale, weight loss, and treatment toxicity. Third, there were statistically significant changes, in the expected direction, in physical and role functioning, global quality of life, fatigue, and nausea and vomiting, for patients whose performance status had improved or worsened during treatment. The reliability and validity of the questionnaire were highly consistent across the three language-cultural groups studied: patients from English-speaking countries, Northern Europe, and Southern Europe.\n\n\nCONCLUSIONS\nThese results support the EORTC QLQ-C30 as a reliable and valid measure of the quality of life of cancer patients in multicultural clinical research settings. Work is ongoing to examine the performance of the questionnaire among more heterogenous patient samples and in phase II and phase III clinical trials.",
"title": ""
},
{
"docid": "a5b0bf255205527c699c0cf3f7ee5270",
"text": "This paper proposes a deep learning approach for accelerating magnetic resonance imaging (MRI) using a large number of existing high quality MR images as the training datasets. An off-line convolutional neural network is designed and trained to identify the mapping relationship between the MR images obtained from zero-filled and fully-sampled k-space data. The network is not only capable of restoring fine structures and details but is also compatible with online constrained reconstruction methods. Experimental results on real MR data have shown encouraging performance of the proposed method for efficient and accurate imaging.",
"title": ""
},
{
"docid": "d69e8f1e75d74345a93f4899b2a0f073",
"text": "CONTEXT\nThis paper provides an overview of the contribution of medical education research which has employed focus group methodology to evaluate both undergraduate education and continuing professional development.\n\n\nPRACTICALITIES AND PROBLEMS\nIt also examines current debates about the ethics and practicalities involved in conducting focus group research. It gives guidance as to how to go about designing and planning focus group studies, highlighting common misconceptions and pitfalls, emphasising that most problems stem from researchers ignoring the central assumptions which underpin the qualitative research endeavour.\n\n\nPRESENTING AND DEVELOPING FOCUS GROUP RESEARCH\nParticular attention is paid to analysis and presentation of focus group work and the uses to which such information is put. Finally, it speculates about the future of focus group research in general and research in medical education in particular.",
"title": ""
},
{
"docid": "49bd1cdbeea10f39a2b34cfa5baac0ef",
"text": "Recently, image inpainting task has revived with the help of deep learning techniques. Deep neural networks, especially the generative adversarial networks~(GANs) make it possible to recover the missing details in images. Due to the lack of sufficient context information, most existing methods fail to get satisfactory inpainting results. This work investigates a more challenging problem, e.g., the newly-emerging semantic image inpainting - a task to fill in large holes in natural images. In this paper, we propose an end-to-end framework named progressive generative networks~(PGN), which regards the semantic image inpainting task as a curriculum learning problem. Specifically, we divide the hole filling process into several different phases and each phase aims to finish a course of the entire curriculum. After that, an LSTM framework is used to string all the phases together. By introducing this learning strategy, our approach is able to progressively shrink the large corrupted regions in natural images and yields promising inpainting results. Moreover, the proposed approach is quite fast to evaluate as the entire hole filling is performed in a single forward pass. Extensive experiments on Paris Street View and ImageNet dataset clearly demonstrate the superiority of our approach. Code for our models is available at https://github.com/crashmoon/Progressive-Generative-Networks.",
"title": ""
},
{
"docid": "8222f36e2aa06eac76085fb120c8edab",
"text": "Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than the median task in that job. Such stragglers increase the average job duration by 47%. This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speedup by 34% to 46% after state-of-the-art mitigation techniques have been applied, using just 5% extra resources for cloning.",
"title": ""
},
{
"docid": "35894d8bc2e3e8e03b47801976a88554",
"text": "Visualization of brand positioning based on consumer web search information: using social network analysis Seung-Pyo Jun Do-Hyung Park Article information: To cite this document: Seung-Pyo Jun Do-Hyung Park , (2017),\" Visualization of brand positioning based on consumer web search information: using social network analysis \", Internet Research, Vol. 27 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/IntR-02-2016-0037",
"title": ""
},
{
"docid": "8cfab085b67d5facd0519a5d3002e07c",
"text": "Identifying a speaker's native language (L1), i.e., mother tongue, based upon non-native English (L2) speech input, is both challenging and useful for many human-machine voice interface applications, e.g., computer assisted language learning (CALL). In this paper, we improve our sub-phone TDNN based i-vector approach to L1 recognition with a more accurate TDNN-derived VAD and a highly discriminative classifier. Two TDNNs are separately trained on native and non-native English, LVCSR corpora, for contrasting their corresponding sub-phone posteriors and resultant supervectors. The derived i-vectors are then exploited for improving the performance further. Experimental results on a database of 25 L1s show a 3.1% identification rate improvement, from 78.7% to 81.8%, compared with a high performance baseline system which has already achieved the best published results on the 2016 ComParE corpus of only 11 L1s. The statistical analysis of the features used in our system provides useful findings, e.g. pronunciation similarity among the non-native English speakers with different L1s, for research on second-language (L2) learning and assessment.",
"title": ""
},
{
"docid": "bc892fe2a369f701e0338085eaa0bdbd",
"text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.",
"title": ""
},
{
"docid": "1223a45c3a2cebe4ce2e94d4468be946",
"text": "In this paper, we present an overview of energy storage in renewable energy systems. In fact, energy storage is a dominant factor. It can reduce power fluctuations, enhances the system flexibility, and enables the storage and dispatching of the electricity generated by variable renewable energy sources such as wind and solar. Different storage technologies are used in electric power systems. They can be chemical, electrochemical, mechanical, electromagnetic or thermal. Energy storage facility is comprised of a storage medium, a power conversion system and a balance of plant. In this work, an application to photovoltaic and wind electric power systems is made. The results obtained under Matlab/Simulink are presented.",
"title": ""
},
{
"docid": "49bc648b7588e3d6d512a65688ce23aa",
"text": "Many Chinese websites (relying parties) use OAuth 2.0 as the basis of a single sign-on service to ease password management for users. Many sites support five or more different OAuth 2.0 identity providers, giving users choice in their trust point. However, although OAuth 2.0 has been widely implemented (particularly in China), little attention has been paid to security in practice. In this paper we report on a detailed study of OAuth 2.0 implementation security for ten major identity providers and 60 relying parties, all based in China. This study reveals two critical vulnerabilities present in many implementations, both allowing an attacker to control a victim user’s accounts at a relying party without knowing the user’s account name or password. We provide simple, practical recommendations for identity providers and relying parties to enable them to mitigate these vulnerabilities. The vulnerabilities have been reported to the parties concerned.",
"title": ""
},
{
"docid": "bcbcb23a0681ef063a37b94ccc26b00c",
"text": "Race and racism persist online in ways that are both new and unique to the Internet, alongside vestiges of centuries-old forms that reverberate significantly both offline and on. As we mark 15 years into the field of Internet studies, it becomes necessary to assess what the extant research tells us about race and racism. This paper provides an analysis of the literature on race and racism in Internet studies in the broad areas of (1) race and the structure of the Internet, (2) race and racism matters in what we do online, and (3) race, social control and Internet law. Then, drawing on a range of theoretical perspectives, including Hall’s spectacle of the Other and DuBois’s view of white culture, the paper offers an analysis and critique of the field, in particular the use of racial formation theory. Finally, the paper points to the need for a critical understanding of whiteness in Internet studies.",
"title": ""
},
{
"docid": "d5378436042ce2e7913d9071669732b6",
"text": "We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested.",
"title": ""
},
{
"docid": "c5d74c69c443360d395a8371055ef3e2",
"text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.",
"title": ""
},
{
"docid": "a698752bf7cf82e826848582816b1325",
"text": "The incidence and context of stotting were studied in Thomson's gazelles. Results suggested that gazelles were far more likely to stot in response to coursing predators, such as wild dogs, than they were to stalking predators, such as cheetahs. During hunts, gazelles that wild dogs selected stotted at lower rates than those they did not select. In addition, those which were chased, but which outran the predators, were more likely to stot, and stotted for longer durations, than those which were chased and killed. In response to wild dogs, gazelles in the dry season, which were probably in poor condition, were less likely to stot, and stotted at lower rates, than those in the wet season. We suggest that stotting could be an honest signal of a gazelle's ability to outrun predators, which coursers take into account when selecting prey.",
"title": ""
},
{
"docid": "1e2a64369279d178ee280ed7e2c0f540",
"text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.",
"title": ""
},
{
"docid": "53dcdeb8e8368864fb795395dd151fd2",
"text": "Superposition coding is a well-known capacity-achieving coding scheme for stochastically degraded broadcast channels. Although well-studied in theory, it is important to understand issues that arise when implementing this scheme in a practical setting. In this paper, we present a software-radio based design of a superposition coding system on the GNU Radio platform with the Universal Software Radio Peripheral acting as the transceiver frontend. We also study the packet error performance and discuss some issues that arise in its implementation.",
"title": ""
}
] |
scidocsrr
|
e9db051b5c0473f339d833b56956c04d
|
Synthesizing Union Tables from the Web
|
[
{
"docid": "eedcff8c2a499e644d1343b353b2a1b9",
"text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.",
"title": ""
},
{
"docid": "a15f80b0a0ce17ec03fa58c33c57d251",
"text": "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google’s general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own “schema” of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links. ∗Work done while all authors were at Google, Inc. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commer cial advantage, the VLDB copyright notice and the title of the publication an d its date appear, and notice is given that copying is by permission of the Very L arge Data Base Endowment. To copy otherwise, or to republish, to post o n servers or to redistribute to lists, requires a fee and/or special pe rmission from the publisher, ACM. VLDB ’08 Auckland, New Zealand Copyright 2008 VLDB Endowment, ACM 000-0-00000-000-0/00/ 00.",
"title": ""
},
{
"docid": "40ec8caea52ba75a6ad1e100fb08e89a",
"text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.",
"title": ""
},
{
"docid": "a86840c1c1c6bef15889fd0e62815402",
"text": "The Web offers a corpus of over 100 million tables [6], but the meaning of each table is rarely explicit from the table itself. Header rows exist in few cases and even when they do, the attribute names are typically useless. We describe a system that attempts to recover the semantics of tables by enriching the table with additional annotations. Our annotations facilitate operations such as searching for tables and finding related tables. To recover semantics of tables, we leverage a database of class labels and relationships automatically extracted from the Web. The database of classes and relationships has very wide coverage, but is also noisy. We attach a class label to a column if a sufficient number of the values in the column are identified with that label in the database of class labels, and analogously for binary relationships. We describe a formal model for reasoning about when we have seen sufficient evidence for a label, and show that it performs substantially better than a simple majority scheme. We describe a set of experiments that illustrate the utility of the recovered semantics for table search and show that it performs substantially better than previous approaches. In addition, we characterize what fraction of tables on the Web can be annotated using our approach.",
"title": ""
},
{
"docid": "c9e47bfe0f1721a937ba503ed9913dba",
"text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.",
"title": ""
}
] |
[
{
"docid": "c591881de09c709ae2679cacafe24008",
"text": "This paper discusses a technique to estimate the position of a sniper using a spatial microphone array placed on elevated platforms. The shooter location is obtained from the exact location of the microphone array, from topographic information of the area and from an estimated direction of arrival (DoA) of the acoustic wave related to the explosion in the gun barrel, which is known as muzzle blast. The estimation of the DOA is based on the time differences the sound wavefront arrives at each pair of microphones, employing a technique known as Generalized Cross Correlation (GCC) with phase transform. The main idea behind the localization procedure used herein is that, based on the DoA, the acoustical path of the muzzle blast (from the weapon to the microphone) can be marked as a straight line on a terrain profile obtained from an accurate digital map, allowing the estimation of the shooter location whenever the microphone array is located on an dominant position. In addition, a new approach to improve the DoA estimation from a cognitive selection of microphones is introduced. In this technique, the microphones selected must form a consistent (sum of delays equal to zero) fundamental loop. The results obtained after processing muzzle blast gunshot signals recorded in a typical scenario, show the effectiveness of the proposed method.",
"title": ""
},
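The core signal-processing step named in the entry above, Generalized Cross Correlation with phase transform (GCC-PHAT), can be sketched in a few lines of numpy; the sampling rate, the synthetic test signal and the sign convention for the delay are assumptions added for illustration and are not taken from the paper:

import numpy as np

def gcc_phat(sig, refsig, fs, max_tau=None, interp=16):
    """Estimate the delay (in seconds) of sig relative to refsig using GCC-PHAT."""
    n = sig.size + refsig.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(refsig, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                       # phase transform: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=interp * n)           # interpolated cross-correlation
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

# toy usage: a synthetic muzzle-blast-like transient delayed by 0.5 ms between two microphones
fs = 48000
t = np.arange(0, 0.05, 1.0 / fs)
pulse = np.exp(-t * 400.0) * np.sin(2 * np.pi * 800.0 * t)
delay_samples = 24                                # 0.5 ms at 48 kHz
mic1 = np.concatenate((pulse, np.zeros(100)))
mic2 = np.concatenate((np.zeros(delay_samples), pulse, np.zeros(100 - delay_samples)))
tau = gcc_phat(mic2, mic1, fs)
print("estimated delay: %.6f s (true 0.000500 s)" % tau)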
{
"docid": "48c851b54fb489cea937cdfac3ca8132",
"text": "This paper describes a new system, dubbed Continuous Appearance-based Trajectory SLAM (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop closure filtering uses a probabilistic distribution of possible loop closures along the robot’s previous trajectory, which is represented by a linked list of previously visited locations linked by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao-Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM to FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.",
"title": ""
},
{
"docid": "854b2bfdef719879a437f2d87519d8e8",
"text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.",
"title": ""
},
{
"docid": "919d86270951a89a14398ee796b4e542",
"text": "The role of the circadian clock in skin and the identity of genes participating in its chronobiology remain largely unknown, leading us to define the circadian transcriptome of mouse skin at two different stages of the hair cycle, telogen and anagen. The circadian transcriptomes of telogen and anagen skin are largely distinct, with the former dominated by genes involved in cell proliferation and metabolism. The expression of many metabolic genes is antiphasic to cell cycle-related genes, the former peaking during the day and the latter at night. Consistently, accumulation of reactive oxygen species, a byproduct of oxidative phosphorylation, and S-phase are antiphasic to each other in telogen skin. Furthermore, the circadian variation in S-phase is controlled by BMAL1 intrinsic to keratinocytes, because keratinocyte-specific deletion of Bmal1 obliterates time-of-day-dependent synchronicity of cell division in the epidermis leading to a constitutively elevated cell proliferation. In agreement with higher cellular susceptibility to UV-induced DNA damage during S-phase, we found that mice are most sensitive to UVB-induced DNA damage in the epidermis at night. Because in the human epidermis maximum numbers of keratinocytes go through S-phase in the late afternoon, we speculate that in humans the circadian clock imposes regulation of epidermal cell proliferation so that skin is at a particularly vulnerable stage during times of maximum UV exposure, thus contributing to the high incidence of human skin cancers.",
"title": ""
},
{
"docid": "5bee5208fa2676b7a7abf4ef01f392b8",
"text": "Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's \"omics\". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application.",
"title": ""
},
{
"docid": "eeaa7d079ef7239a9971aff9e86400fb",
"text": "We study the problem of scalable monitoring of operational 3G wireless networks. Threshold-based performance monitoring in large 3G networks is very challenging for two main factors: large network scale and dynamics in both time and spatial domains. A fine-grained threshold setting (e.g., perlocation hourly) incurs prohibitively high management complexity, while a single static threshold fails to capture the network dynamics, thus resulting in unacceptably poor alarm quality (up to 70% false/miss alarm rates). In this paper, we propose a scalable monitoring solution, called threshold-compression that can characterize the location- and time-specific threshold trend of each individual network element (NE) with minimal threshold setting. The main insight is to identify groups of NEs with similar threshold behaviors across location and time dimensions, forming spatial-temporal clusters to reduce the number of thresholds while maintaining acceptable alarm accuracy in a large-scale 3G network. Our evaluations based on the operational experience on a commercial 3G network have demonstrated the effectiveness of the proposed solution. We are able to reduce the threshold setting up to 90% with less than 10% false/miss alarms.",
"title": ""
},
{
"docid": "ba6fe1b26d76d7ff3e84ddf3ca5d3e35",
"text": "The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5- to 7-year-olds; N = 36) were presented with science lessons on 1 of 3 schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.",
"title": ""
},
{
"docid": "2f761de3f94d86a2c73aac3dce413dca",
"text": "The class imbalance problem has been recognized in many practical domains and a hot topic of machine learning in recent years. In such a problem, almost all the examples are labeled as one class, while far fewer examples are labeled as the other class, usually the more important class. In this case, standard machine learning algorithms tend to be overwhelmed by the majority class and ignore the minority class since traditional classifiers seeking an accurate performance over a full range of instances. This paper reviewed academic activities special for the class imbalance problem firstly. Then investigated various remedies in four different levels according to learning phases. Following surveying evaluation metrics and some other related factors, this paper showed some future directions at last.",
"title": ""
},
{
"docid": "f5ad4e1901dc96de45cb191bf1869828",
"text": "The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixedlength vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and on the crossdomain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to the cross-domain data. Our model is equipped with intra-sentence gated-attention composition which helps achieve a better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.",
"title": ""
},
{
"docid": "085f2b04f6f7c6d9a140d3ef027cbeca",
"text": "E-Government implementation and adoption is influenced by several factors having either an enhancing or an aggravating effect on e-government implementation and use. This paper aims at shedding light on obstacles hindering mainly e-government implementation from two perspectives: the supply- and the demand-side of e-government services. The contribution to research is seen in summarized insights into what obstacles in e-government were identified in prior research and the suggestion of a classification of obstacles into the two categories of formal and informal obstacles. Literature was reviewed following a conceptual model encompassing a merger and extension of existing approaches. A process of identifying obstacles and improving services in the form of a loop is discussed before possible future research lines will be pointed to.",
"title": ""
},
{
"docid": "640fd96e02d8aa69be488323f77b40ba",
"text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.",
"title": ""
},
{
"docid": "1e2767ace7b4d9f8ca2a5eee21684240",
"text": "Modern data analytics applications typically process massive amounts of data on clusters of tens, hundreds, or thousands of machines to support near-real-time decisions.The quantity of data and limitations of disk and memory bandwidth often make it infeasible to deliver answers at interactive speeds. However, it has been widely observed that many applications can tolerate some degree of inaccuracy. This is especially true for exploratory queries on data, where users are satisfied with \"close-enough\" answers if they can come quickly. A popular technique for speeding up queries at the cost of accuracy is to execute each query on a sample of data, rather than the whole dataset. To ensure that the returned result is not too inaccurate, past work on approximate query processing has used statistical techniques to estimate \"error bars\" on returned results. However, existing work in the sampling-based approximate query processing (S-AQP) community has not validated whether these techniques actually generate accurate error bars for real query workloads. In fact, we find that error bar estimation often fails on real world production workloads. Fortunately, it is possible to quickly and accurately diagnose the failure of error estimation for a query. In this paper, we show that it is possible to implement a query approximation pipeline that produces approximate answers and reliable error bars at interactive speeds.",
"title": ""
},
{
"docid": "0bd7c453279c97333e7ac6c52f7127d8",
"text": "Among various biometric modalities, signature verification remains one of the most widely used methods to authenticate the identity of an individual. Signature verification, the most important component of behavioral biometrics, has attracted significant research attention over the last three decades. Despite extensive research, the problem still remains open to research due to the variety of challenges it offers. The high intra-class variations in signatures resulting from different physical or mental states of the signer, the differences that appear with aging and the visual similarity in case of skilled forgeries etc. are only a few of the challenges to name. This paper is intended to provide a review of the recent advancements in offline signature verification with a discussion on different types of forgeries, the features that have been investigated for this problem and the classifiers employed. The pros and cons of notable recent contributions to this problem have also been presented along with a discussion of potential future research directions on this subject.",
"title": ""
},
{
"docid": "bf126b871718a5ee09f1e54ea5052d20",
"text": "Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and inadequate number of training samples leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme allows to combine advantages of cross-entropy and Dice loss leading to qualitative improvements in segmentation. We demonstrate computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps we extract clinically relevant cardiac parameters and hand-craft features which reflect the clinical diagnostic analysis and train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. Our approach in ACDC-2017 challenge stood second place for segmentation and first place in automated cardiac disease diagnosis tasks with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge our approach attained 0.74 Jaccard index, which is so far the highest published result in fully automated algorithms. In the Kaggle challenge our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. Our approach combined both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application.",
"title": ""
},
{
"docid": "37257f51eddbad5d7a151c12083e51a7",
"text": "As data rate pushes to 10Gbps and beyond, timing jitter has become one of the major factors that limit the link performance. Thorough understanding of the link jitter characteristics and accurate modeling of their impact on link performance is a must even at early design stage. This paper discusses the characteristics of timing jitter in typical I/O interfaces and overviews various jitter modeling methods proposed in the literature during the past few years. Recommendations are given based on the characteristics of timing jitter and their locations.",
"title": ""
},
{
"docid": "4007287be14b0cc732f5c87458f01147",
"text": "In view of the importance of molecular sensing in the function of the gastrointestinal (GI) tract, we assessed whether signal transduction proteins that mediate taste signaling are expressed in cells of the human gut. Here, we demonstrated that the alpha-subunit of the taste-specific G protein gustducin (Galpha(gust)) is expressed prominently in cells of the human colon that also contain chromogranin A, an established marker of endocrine cells. Double-labeling immunofluorescence and staining of serial sections demonstrated that Galpha(gust) localized to enteroendocrine L cells that express peptide YY and glucagon-like peptide-1 in the human colonic mucosa. We also found expression of transcripts encoding human type 2 receptor (hT2R) family members, hT1R3, and Galpha(gust) in the human colon and in the human intestinal endocrine cell lines (HuTu-80 and NCI-H716 cells). Stimulation of HuTu-80 or NCI-H716 cells with the bitter-tasting compound phenylthiocarbamide, which binds hT2R38, induced a rapid increase in the intracellular Ca2+ concentration in these cells. The identification of Galpha(gust) and chemosensory receptors that perceive chemical components of ingested substances, including drugs and toxins, in open enteroendocrine L cells has important implications for understanding molecular sensing in the human GI tract and for developing novel therapeutic compounds that modify the function of these receptors in the gut.",
"title": ""
},
{
"docid": "2b40b00fdfc367ace038e2a6409be744",
"text": "Recent advances in digital imaging technology, computational speed, storage capacity and networking have made it possible to capture, manipulate, store, and transmit images at interactive speeds with equipment available at every home or business. As a result, images have become a dominant part of information exchange. They are used for entertainment, education, commerce, medicine, science, and other applications. The rapid accumulation of large collections of digital images has created the need for efficient and intelligent schemes for image classification. Texture is an important feature of objects in an image .Nowadays there has been a great interest in the development of texture based Image Classification methods in many different areas. Most of the image texture classification systems use the gray-level co-occurrence matrices (GLCM) and selforganizing map (SOM) methods. The GLCM is a matrix of how often different combinations of pixel brightness values (grey levels) occur in an image. The GLCM matrices extracted from an image database are processed to create the training data set for a SOM neural network. The SOM model organizes and extracts prototypes from processed GLCM matrices.",
"title": ""
},
{
"docid": "fb54ca0c25ffe37cf9bab5677f52c341",
"text": "Convolutional networks (ConvNets) have become a popular approach to computer vision. Here we consider the parallelization of ConvNet training, which is computationally costly. Our novel parallel algorithm is based on decomposition into a set of tasks, most of which are convolutions or FFTs. Theoretical analysis suggests that linear speedup with the number of processors is attainable. To attain such performance on real shared-memory machines, our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses, and sums the convergent convolution outputs via an almost wait-free concurrent method to reduce time spent in critical sections. Benchmarking with multi-core CPUs shows speedup roughly equal to the number of physical cores. We also demonstrate 90x speedup on a many-core CPU (Xeon Phi Knights Corner). Our algorithm can be either faster or slower than certain GPU implementations depending on specifics of the network architecture, kernel sizes, and density and size of the output patch.",
"title": ""
},
{
"docid": "40b30e582b98a1192b52e740193807ca",
"text": "Strenuous exercise is known to induce oxidative stress leading to the generation of free radicals. The purpose of the present study was to investigate the effects of lycopene, an antioxidant nutrient, at a relatively low dose (2.6 mg/kg per d) and a relatively high dose (7.8 mg/kg per d) on the antioxidant status of blood and skeletal muscle tissues in rats after exhaustive exercise. Rats were divided into six groups: sedentary control (C); sedentary control with low-dose lycopene (CLL); sedentary control with high-dose lycopene (CHL); exhaustive exercise (E); exhaustive exercise with low-dose lycopene (ELL); exhaustive exercise with high-dose lycopene (EHL). After 30 d, the rats in the three C groups were killed without exercise, but the rats in the three E groups were killed immediately after an exhaustive running test on a motorised treadmill. The results showed that xanthine oxidase (XO) activities of plasma and muscle, and muscular myeloperoxidase (MPO) activity in group E were significantly increased compared with group C. Compared with group E, the elevations of XO and MPO activities of muscle were significantly decreased in group EHL. The malondialdehyde concentrations of plasma and tissues in group E were significantly increased by 72 and 114 %, respectively, compared with those in group C. However, this phenomenon was prevented in rats of the ELL and EHL groups. There was no significant difference in the GSH concentrations of erythrocytes in each group; however, exhaustive exercise resulted in a significant decrease in the GSH content of muscle. In conclusion, these results suggested that lycopene protected muscle tissue from oxidative stress after exhaustive exercise.",
"title": ""
},
{
"docid": "36bc32033cbecf8ee00c5ec84ef26cfa",
"text": "Most of the device's technology has been moving towards the complex and produce of Nano-IC with demands for cheaper cost, smaller size and better thermal and electrical performance. One of the marketable packages is Quad Flat No-Lead (QFN) package. Due to the high demand of miniaturization of electronic products, QFN development becomes more promising, such as the lead frame design with half edge, cheaper tape, shrinkage of package size as to achieve more units per lead frame (cost saving) and etc [1]. The improvement methods in the lead frame design, such as lead frame metal tie bar and half edge features are always the main challenges for QFN package. With reduced the size of metal tie bar, it will fasten the package singulation process, whereas the half edge is designed for the mold compound locking for delamination reduction purpose. This paper specifically will discuss how the critical wire bonding parameters, capillary design and environmental conditions interact each other result to the unstable leads (second bond failures). During the initial evaluation of new package SOT1261 with rough PPF lead frame, several short tails and fish tails observed on wedge bond when applied with the current parameter setting which have been qualified in other packages with same wire size (18um Au wire). These problems did not surface out in earlier qualified devices mainly due to the second bond parameter robustness, capillary designs, lead frame design changes, different die packages, lead frame batches and contamination levels. One of the main root cause been studied is the second bond parameter setting which is not robust enough for the flimsy lead frame. The new bonding methodology, with the concept of low base ultrasonic and high force setting applied together with scrubbing mechanism to eliminate the fish tail bond and also reduce short tail occurrence on wedge. Wire bond parameters optimized to achieve zero fish tail, and wedge pull reading with >4.0gf. Destructive test such as wedge pull test used to test the bonding quality. Failure modes are analyzed using high power optical scope microscope and Scanning Electronic Microscope (SEM). By looking through into all possible root causes, and identifying how the factors are interacting, some efforts on the Design of Experiments (DOE) are carried out and good solutions were implemented.",
"title": ""
}
] |
scidocsrr
|
46847a99b4f22f6b60a6e8fb0414ca31
|
Likert scales, levels of measurement and the "laws" of statistics.
|
[
{
"docid": "eea651fa89b83f46030615fed0ca1dac",
"text": "Dipping my toe into the water of educational research, I have recently used Likert-type rating scales to measure student views on various educational interventions. Likert scales are commonly used to measure attitude, providing a range of responses to a given question or statement . Typically, there are 5 categories of response, from (for example) 1 1⁄4 strongly disagree to 5 1⁄4 strongly agree, although there are arguments in favour of scales with 7 or with an even number of response categories.",
"title": ""
}
] |
[
{
"docid": "96b6e3a57c881870d7c7e6f5e4805262",
"text": "New text as data techniques offer a great promise: the ability to inductively discover measures that are useful for testing social science theories of interest from large collections of text. We introduce a conceptual framework for making causal inferences with discovered measures as a treatment or outcome. Our framework enables researchers to discover high-dimensional textual interventions and estimate the ways that observed treatments affect text-based outcomes. We argue that nearly all text-based causal inferences depend upon a latent representation of the text and we provide a framework to learn the latent representation. But estimating this latent representation, we show, creates new risks: we may introduce an identification problem or overfit. To address these risks we describe a split-sample framework and apply it to estimate causal effects from an experiment on immigration attitudes and a study on bureaucratic response. Our work provides a rigorous foundation for textbased causal inferences. ∗We thank Edo Airoldi, Peter Aronow, Matt Blackwell, Sarah Bouchat, Chris Felton, Mark Handcock, Erin Hartman, Rebecca Johnson, Gary King, Ian Lundberg, Rich Nielsen, Thomas Richardson, Matt Salganik, Melissa Sands, Fredrik Sävje, Arthur Spirling, Alex Tahk, Endre Tvinnereim, Hannah Waight, Hanna Wallach, Simone Zhang and numerous seminar participants for useful discussions about making causal inference with texts. We also thank Dustin Tingley for early conversations about potential SUTVA concerns with respect to STM and sequential experiments as a possible way to combat it. In addition, we thank a National Science Foundation grant under the Resource Implementations for Data Intensive Research program. †Ph.D. Candidate, Department of Politics, Princeton University, negami@princeton.edu ‡Ph.D. Candidate, Graduate School of Business, Stanford University, cjfong@stanford.edu §Associate Professor, Department of Political Science, University of Chicago, JustinGrimmer.org, grimmer@uchicago.edu. ¶Assistant Professor, Department of Political Science, University of California San Diego, meroberts@ucsd.edu ‖Assistant Professor, Department of Sociology, Princeton University, brandonstewart.org, bms4@princeton.edu",
"title": ""
},
{
"docid": "c49ffcb45cc0a7377d9cbdcf6dc07057",
"text": "Dermoscopy is an in vivo method for the early diagnosis of malignant melanoma and the differential diagnosis of pigmented lesions of the skin. It has been shown to increase diagnostic accuracy over clinical visual inspection in the hands of experienced physicians. This article is a review of the principles of dermoscopy as well as recent technological developments.",
"title": ""
},
{
"docid": "35f6a4ee2364aea9861b7606c8cb7d40",
"text": "The research on robust principal component analysis (RPCA) has been attracting much attention recently. The original RPCA model assumes sparse noise, and use the L1-norm to characterize the error term. In practice, however, the noise is much more complex and it is not appropriate to simply use a certainLp-norm for noise modeling. We propose a generative RPCA model under the Bayesian framework by modeling data noise as a mixture of Gaussians (MoG). The MoG is a universal approximator to continuous distributions and thus our model is able to fit a wide range of noises such as Laplacian, Gaussian, sparse noises and any combinations of them. A variational Bayes algorithm is presented to infer the posterior of the proposed model. All involved parameters can be recursively updated in closed form. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and background subtraction.",
"title": ""
},
{
"docid": "19269e78ef1aee1f4921230b42b6c4b6",
"text": "Traditional methods of motion segmentation use powerful geometric constraints to understand motion, but fail to leverage the semantics of high-level image understanding. Modern CNN methods of motion analysis, on the other hand, excel at identifying well-known structures, but may not precisely characterize well-known geometric constraints. In this work, we build a new statistical model of rigid motion flow based on classical perspective projection constraints. We then combine piecewise rigid motions into complex deformable and articulated objects, guided by semantic segmentation from CNNs and a second \"object-level\" statistical model. This combination of classical geometric knowledge combined with the pattern recognition abilities of CNNs yields excellent performance on a wide range of motion segmentation benchmarks, from complex geometric scenes to camouflaged animals.",
"title": ""
},
{
"docid": "fa6c797c1aad378198363ada5435f361",
"text": "The first workshop on Interactive Data Mining is held in Melbourne, Australia, on February 15, 2019 and is co-located with 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). The goal of this workshop is to share and discuss research and projects that focus on interaction with and interactivity of data mining systems. The program includes invited speaker, presentation of research papers, and a discussion session.",
"title": ""
},
{
"docid": "a68244dedee73f87103a1e05a8c33b20",
"text": "Given the knowledge that the same or similar objects appear in a set of images, our goal is to simultaneously segment that object from the set of images. To solve this problem, known as the cosegmentation problem, we present a method based upon hierarchical clustering. Our framework first eliminates intra-class heterogeneity in a dataset by clustering similar images together into smaller groups. Then, from each image, our method extracts multiple levels of segmentation and creates connections between regions (e.g. superpixel) across levels to establish intra-image multi-scale constraints. Next we take advantage of the information available from other images in our group. We design and present an efficient method to create inter-image relationships, e.g. connections between image regions from one image to all other images in an image cluster. Given the intra & inter-image connections, we perform a segmentation of the group of images into foreground and background regions. Finally, we compare our segmentation accuracy to several other state-of-the-art segmentation methods on standard datasets, and also demonstrate the robustness of our method on real world data.",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "acb2177446deb8e279deca87724dbdca",
"text": "All teachers acknowledge appropriate student behaviors and desired social skills and provide differential attention/response to inappropriate behaviors. (CL22) Evidence Review: Research demonstrates that teachers who establish an orderly and positive classroom environment by teaching and reinforcing rules and routines reduce behavior problems. Teacher's acknowledgement of appropriate behavior is related to both initial and long-term academic engagement and social success (Akin-Little et al. (2004); Cameron et al.(2001). Rewards (such as approval, praise, recognition, special privileges, points, or other incentives) are most effective in reinforcing students' appropriate behavior when teachers: Use small rewards frequently, rather than large rewards infrequently; Deliver rewards quickly after the desired behavior is exhibited; Reward behavior, not the individual, and communicate to students the specific behavior that led to the reward; Use several different kinds of rewards selected carefully to ensure that they are reinforcing positive behavior; and Gradually begin to reduce and then eliminate rewards. Research also shows that the amount of praise that students receive for appropriate behavior should exceed the amount of times they are corrected or reprimanded by a ratio of four to one to improve student academic and behavioral outcomes. Evidence Review: Deci, Koestner, and Ryan (2001) conducted a meta-analysis in which they examined the effect of extrinsic rewards on intrinsic motivation. They found that verbal rewards can enhance intrinsic motivation; however, verbal rewards are less likely to have a positive effect for children than for older individuals (i.e., college students). Verbal rewards can have a negative effect on intrinsic motivation if they are administered in a controlling rather than informational way. When presenting high-level interest tasks, the use of tangible rewards can have negative consequences for subsequent interest, persistence and preference for challenge, especially for children. Evidence Review: There is compelling meta-analytic evidence that appropriate disciplinary interventions including teacher reaction to students appropriate and inappropriate behavior produce positive change in student behavior. Simple and often subtle teacher reactions have been shown to decrease student misbehavior including eye contact, moving closer to the student, a shake of the head, a simple verbal reminder-ideally as privately and subtly as possible, reminder of the desired appropriate behavior, and simply telling the student to stop the inappropriate behavior (Madsen, Becker, & Thomas, 1968). Teachers should also quietly and privately acknowledge appropriate behavior.",
"title": ""
},
{
"docid": "c9aa8e3ca2f1fc9b4f6b745970e55eee",
"text": "Embedded systems for safety-critical applications often integrate multiple “functions” and must generally be fault-tolerant. These requirements lead to a need for mechanisms and services that provide protection against fault propagation and ease the construction of distributed fault-tolerant applications. A number of bus architectures have been developed to satisfy this need. This paper reviews the requirements on these architectures, the mechanisms employed, and the services provided. Four representative architectures (SAFEbus TM , SPIDER, TTA, and FlexRay) are briefly described.",
"title": ""
},
{
"docid": "f8a32f8ccbc14ce1f4e4f5029ef122b8",
"text": "Content-based image retrieval (CBIR) is one of the most important applications of computer vision. In recent years, there have been many important advances in the development of CBIR systems, especially Convolutional Neural Networks (CNNs) and other deep-learning techniques. On the other hand, current CNN-based CBIR systems suffer from high computational complexity of CNNs. This problem becomes more severe as mobile applications become more and more popular. The current practice is to deploy the entire CBIR systems on the server side while the client side only serves as an image provider. This architecture can increase the computational burden on the server side, which needs to process thousands of requests per second. Moreover, sending images have the potential of personal information leakage. As the need of mobile search expands, concerns about privacy are growing. In this article, we propose a fast image search framework, named DeepSearch, which makes complex image search based on CNNs feasible on mobile phones. To implement the huge computation of CNN models, we present a tensor Block Term Decomposition (BTD) approach as well as a nonlinear response reconstruction method to accelerate the CNNs involving in object detection and feature extraction. The extensive experiments on the ImageNet dataset and Alibaba Large-scale Image Search Challenge dataset show that the proposed accelerating approach BTD can significantly speed up the CNN models and further makes CNN-based image search practical on common smart phones.",
"title": ""
},
{
"docid": "f6ec2ee334708863461f5715483b6540",
"text": "Extracting useful entities and attribute values from illicit domains such as human tra cking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have ‘long tails’ and su↵er from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domainspecific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-Measure on five annotated sets of real-world human tra cking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be e ciently bootstrapped even in a serial computing environment.",
"title": ""
},
{
"docid": "b19cbe5e99f2edb701ba22faa7406073",
"text": "There are many wireless monitoring and control applications for industrial and home markets which require longer battery life, lower data rates and less complexity than available from existing wireless standards. These standards provide higher data rates at the expense of power consumption, application complexity and cost. What these markets need, in many cases, is a standardsbased wireless technology having the performance characteristics that closely meet the requirements for reliability, security, low power and low cost. This standards-based, interoperable wireless technology will address the unique needs of low data rate wireless control and sensor-based networks.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "c7b2ada500bf543b5f3bcc42d504d888",
"text": "This paper proposes a novel passive technique for the collection of microwave images. A compact component is developed that passively codes and sums the waves received by an antenna array to which it is connected, and produces a unique signal that contains all of the scene information. This technique of passive multiplexing simplifies the microwave reception chains for radar and beamforming systems (whose complexity and cost highly increase with the number of antennas) and does not require any active elements to achieve beamsteering. The preservation of the waveforms is ensured using orthogonal codes supplied by the propagation through the component's uncorrelated channels. Here we show a multiplexing technique in the physical layer that, besides being compact and passive, is compatible with all ultrawideband antennas, enabling its implementation in various fields.",
"title": ""
},
{
"docid": "f877a019708515184417a23f6052c77b",
"text": "Political parties play a vital role in democracies by linking citizens to their representatives. Nonetheless, a longstanding concern is that partisan identification slants decision-making. Citizens may support (oppose) policies that they would otherwise oppose (support) in the absence of an endorsement from a political party—this is due in large part to what is called partisan motivated reasoning where individuals interpret information through the lens of their party commitment. We explore partisan motivated reasoning in a survey experiment focusing on support for an energy law. We identify two politically relevant factors that condition partisan motivated reasoning: (1) an explicit inducement to form an ‘‘accurate’’ opinion, and (2) cross-partisan, but not consensus, bipartisan support for the law. We further provide evidence of how partisan motivated reasoning works psychologically and affects opinion strength. We conclude by discussing the implications of our results for understanding opinion formation and the overall quality of citizens’ opinions.",
"title": ""
},
{
"docid": "a61ae3623a0ba25e38828f3fe225a633",
"text": "Manufacturers always face cost-reduction and efficiency challenges in their operations. Industries require improvement in Production Lead Times, costs and customer service levels to survive. Because of this, companies have become more customers focused. The result is that companies have been putting in significant effort to improve their efficiency. In this paper Value Stream Mapping (VSM) tool is used in bearing manufacturing industry by focusing both on processes and their cycle times for a product UC208 INNER which is used in plumber block. In order to use the value stream mapping, relevant data has been collected and analyzed. After collecting the data customer need was identified. Current state map was draw by defining the resources and activities needed to manufacture, deliver the product. The study of current state map shows the areas for improvement and identifying the different types of wastes. From the current state map, it was noticeable that Annealing and CNC Machining processing have higher cycle time and work in process. The lean principles and techniques implemented or suggested and future state map was created and the total lead time was reduced from 7.3 days to 3.8 days. The WIP at each work station has also been reduced. The production lead time was reduced from 409 seconds to 344 seconds.",
"title": ""
},
{
"docid": "3fd551696803695056dd759d8f172779",
"text": "The aim of this research essay is to examine the structural nature of theory in Information Systems. Despite the impor tance of theory, questions relating to its form and structure are neglected in comparison with questions relating to episte mology. The essay addresses issues of causality, explanation, prediction, and generalization that underlie an understanding of theory. A taxonomy is proposed that classifies information systems theories with respect to the manner in which four central goals are addressed: analysis, explanation, predic tion, and prescription. Five interrelated types of theory are distinguished: (I) theory for analyzing, (2) theory for ex plaining, (3) theory for predicting, (4) theory for explaining and predicting, and (5) theory for design and action. Examples illustrate the nature of each theory type. The appli cability of the taxonomy is demonstrated by classifying a sample of journal articles. The paper contributes by showing that multiple views of theory exist and by exposing the assumptions underlying different viewpoints. In addition, it is suggested that the type of theory under development can influence the choice of an epistemological approach. Support Allen Lee was the accepting senior editor for this paper. M. Lynne Markus, Michael D. Myers, and Robert W. Zmud served as reviewers. is given for the legitimacy and value of each theory type. The building of integrated bodies of theory that encompass all theory types is advocated.",
"title": ""
},
{
"docid": "64c06bffe4aeff54fbae9d87370e552c",
"text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.",
"title": ""
},
{
"docid": "7b3ee7a7e13a4f830e589545196ca8dc",
"text": "A new model of contour extraction and perceptual grouping in the primary visual cortex is presented and discussed. It differs from previous models since it incorporates four main mechanisms, according to recent physiological data: a feed-forward input from the lateral geniculate nucleus, characterized by Gabor elongated receptive fields; an inhibitory feed-forward input, maximally oriented in the orthogonal direction of the target cell, which suppresses non-optimal stimuli and warrants contrast invariance; an excitatory cortical feedback, which respects co-axial and co-modularity criteria; and a long-range isotropic feedback inhibition. Model behavior has been tested on artificial images with contours of different curvatures, in the presence of considerable noise or in the presence of broken contours, and on a few real images. A sensitivity analysis has also been performed on the role of intracortical synapses. Results show that the model can extract correct contours within acceptable time from image presentation (30-40 ms). The feed-forward input plays a major role to set an initial correct bias for the subsequent feedback and to ensure contrast-invariance. Long-range inhibition is essential to suppress noise, but it may suppress small contours due to excessive competition with greater contours. Cortical excitation sharpens the initial bias and improves saliency of the contours. Model results support the idea that contour extraction is one the primary steps in the visual processing stream, and that local processing in V1 is able to solve this task even in difficult conditions, without the participation of higher visual centers.",
"title": ""
}
] |
scidocsrr
|
784b3274c26a7ce84049cd33febd2781
|
Antecedents of the adoption of online games technologies: The study of adolescent behavior in playing online games
|
[
{
"docid": "34f0a6e303055fc9cdefa52645c27ed5",
"text": "Purpose – The purpose of this paper is to identify the factors that influence people to play socially interactive games on mobile devices. Based on network externalities and theory of uses and gratifications (U&G), it seeks to provide direction for further academic research on this timely topic. Design/methodology/approach – Based on 237 valid responses collected from online questionnaires, structural equation modeling technology was employed to examine the research model. Findings – The results reveal that both network externalities and individual gratifications significantly influence the intention to play social games on mobile devices. Time flexibility, however, which is one of the mobile device features, appears to contribute relatively little to the intention to play mobile social games. Originality/value – This research successfully applies a combination of network externalities theory and U&G theory to investigate the antecedents of players’ intentions to play mobile social games. This study is able to provide a better understanding of how two dimensions – perceived number of users/peers and individual gratification – influence mobile game playing, an insight that has not been examined previously in the mobile apps literature.",
"title": ""
}
] |
[
{
"docid": "f68b11af8958117f75fc82c40c51c395",
"text": "Uncertainty accompanies our life processes and covers almost all fields of scientific studies. Two general categories of uncertainty, namely, aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from natural variability of the physical world (e.g., random show of a flipped coin), epistemic uncertainty origins from human's lack of knowledge of the physical world, as well as ability of measuring and modeling the physical world (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on the probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.",
"title": ""
},
{
"docid": "b4f2cbda004ab3c0849f0fe1775c2a7a",
"text": "This research investigates the influence of religious preference and practice on the use of contraception. Much of earlier research examines the level of religiosity on sexual activity. This research extends this reasoning by suggesting that peer group effects create a willingness to mask the level of sexuality through the use of contraception. While it is understood that certain religions, that is, Catholicism does not condone the use of contraceptives, this research finds that Catholics are more likely to use certain methods of contraception than other religious groups. With data on contraceptive use from the Center for Disease Control’s Family Growth Survey, a likelihood probability model is employed to investigate the impact religious affiliation on contraception use. Findings suggest a preference for methods that ensure non-pregnancy while preventing feelings of shame and condemnation in their religious communities.",
"title": ""
},
{
"docid": "b1bb036fb8df8174d4c6b27480c2dc89",
"text": "Over the past 5 years numerous reports have confirmed and replicated the specific brain cooling and thermal window predictions derived from the thermoregulatory theory of yawning, and no study has found evidence contrary to these findings. Here we review the comparative research supporting this model of yawning among homeotherms, while highlighting a recent report showing how the expression of contagious yawning in humans is altered by seasonal climate variation. The fact that yawning is constrained to a thermal window of ambient temperature provides unique and compelling support in favor of this theory. Heretofore, no existing alternative hypothesis of yawning can explain these results, which have important implications for understanding the potential functional role of this behavior, both physiologically and socially, in humans and other animals. In discussion we stress the broader applications of this work in clinical settings, and counter the various criticisms of this theory.",
"title": ""
},
{
"docid": "50a89110795314b5610fabeaf41f0e40",
"text": "People are capable of robust evaluations of their decisions: they are often aware of their mistakes even without explicit feedback, and report levels of confidence in their decisions that correlate with objective performance. These metacognitive abilities help people to avoid making the same mistakes twice, and to avoid overcommitting time or resources to decisions that are based on unreliable evidence. In this review, we consider progress in characterizing the neural and mechanistic basis of these related aspects of metacognition-confidence judgements and error monitoring-and identify crucial points of convergence between methods and theories in the two fields. This convergence suggests that common principles govern metacognitive judgements of confidence and accuracy; in particular, a shared reliance on post-decisional processing within the systems responsible for the initial decision. However, research in both fields has focused rather narrowly on simple, discrete decisions-reflecting the correspondingly restricted focus of current models of the decision process itself-raising doubts about the degree to which discovered principles will scale up to explain metacognitive evaluation of real-world decisions and actions that are fluid, temporally extended, and embedded in the broader context of evolving behavioural goals.",
"title": ""
},
{
"docid": "c85ee4139239b17d98b0d77836e00b72",
"text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.",
"title": ""
},
{
"docid": "4ba595a34ae03c1724d434f0cbdbf663",
"text": "Studies on the development of protocols for the clonal propagation, through somatic embryogenesis, of coconut have been reported for the past three decades, mostly using inflorescence explants, but with low reproducibility and efficiency. Recent improvements in these respects have been achieved using plumular explants. Here, we report a developmental study of embryogenesis in plumule explants using histological techniques in order to extend our understanding of this process. Coconut plumule explants consisted of the shoot meristem including leaf primordia. At day 15 of culture, the explants did not show any apparent growth; however, a transverse section showed noticeable growth of the plumular leaves forming a ring around the inner leaves and the shoot meristem, which did not show any apparent growth. At day 30, the shoot meristem started to grow and the plumular leaves continued growing., At day 45, the explants were still compact and white in color, but showed partial dedifferentiation and meristematic cell proliferation leading to the development of callus structures with a translucent appearance. After 60 d, these meristematic cells evolved into nodular structures. At day 75, the nodular structures became pearly globular structures on the surface of translucent structures, from which somatic embryos eventually formed and presented well-developed root and caulinar meristems. These results allow better insights and an integrated view into the somatic embryogenesis process in coconut plumule explants, which could be helpful for future studies that eventually could lead us to improved control of the process and greater efficiency of somatic embryo and plantlet formation.",
"title": ""
},
{
"docid": "9e9f967d9e19ab88830a91290e7ac6e7",
"text": "Planning for the information systems in an organization generally has not been closely related to the overall strategic planning processes through which the organization prepares for its future. An M/S strategic planning process is conceptualized and illustrated as one which /inks the. organization’s \"strategy set\" tb an MIS \"strategy set. \" The literature of management information systems (MIS) concentrates largely on the nature and structure of MIS’s and on processes for designing and developing such systems. The idea of \"planning for the MIS\" is usually treated as either one of developing the need and the general design concept for such a system, or in the context of project planning for the MIS development effort. However, strategic planning for the informational needs of the organization is both feasible and necessary if the MIS is to support the basic purposes and goals of the organization. Indeed, one of the possible explanations [6] for the failure of many MIS’s i~-that they have been designed from the same \"bottom up\" point of view that characterized the development of the data processing systems of an earlier era. Such design approaches primarily reflect the pursuit of efficiency, such as through cost savings, rather than the pursuit of greater organizational effectiveness.1 The modern view of an MIS as an organizational decision support system is inconsistent with the design/development approaches which are appropriate for data processing. The organization’s operating efficiency is but one aspect for consideration in management decision making. The achievement of greater organizational effectiveness is the paramount consideration in most of the management decisions which the MIS is to support; it also must be of paramount importance in the design of the MIS. There is an intrinsic linkage of the decisionsupporting MIS to the organization’s purpose, objectives, and strategy. While.this conclusion may appear to be straightforward, it has not been operationalized as a part of MIS design methodology. There are those who argue that the MIS designer cannot hope to get involved in such things as organizational missions, objectives, and strategies, since they are clearly beyond his domain of authority. This article describes an operationally feasible approach for identifying and utilizing the elements of the organization’s \"strategy set\" to plan for the MIS. Whether or not written state",
"title": ""
},
{
"docid": "ada7b43edc18b321c57a978d7a3859ae",
"text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.",
"title": ""
},
{
"docid": "d509cb384ecddafa0c4f866882af2c77",
"text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.",
"title": ""
},
{
"docid": "410d4b0eb8c60517506b0d451cf288ba",
"text": "Prepositional phrases (PPs) express crucial information that knowledge base construction methods need to extract. However, PPs are a major source of syntactic ambiguity and still pose problems in parsing. We present a method for resolving ambiguities arising from PPs, making extensive use of semantic knowledge from various resources. As training data, we use both labeled and unlabeled data, utilizing an expectation maximization algorithm for parameter estimation. Experiments show that our method yields improvements over existing methods including a state of the art dependency parser.",
"title": ""
},
{
"docid": "1d15a6b19d4b36fec96afc0e5f55cd25",
"text": "Image captioning has been recently gaining a lot of attention thanks to the impressive achievements shown by deep captioning architectures, which combine Convolutional Neural Networks to extract image representations and Recurrent Neural Networks to generate the corresponding captions. At the same time, a significant research effort has been dedicated to the development of saliency prediction models, which can predict human eye fixations. Even though saliency information could be useful to condition an image captioning architecture, by providing an indication of what is salient and what is not, research is still struggling to incorporate these two techniques. In this work, we propose an image captioning approach in which a generative recurrent neural network can focus on different parts of the input image during the generation of the caption, by exploiting the conditioning given by a saliency prediction model on which parts of the image are salient and which are contextual. We show, through extensive quantitative and qualitative experiments on large-scale datasets, that our model achieves superior performance with respect to captioning baselines with and without saliency and to different state-of-the-art approaches combining saliency and captioning.",
"title": ""
},
{
"docid": "7a05f2c12c3db9978807eb7c082db087",
"text": "This paper discusses the importance, the complexity and the challenges of mapping mobile robot’s unknown and dynamic environment, besides the role of sensors and the problems inherited in map building. These issues remain largely an open research problems in developing dynamic navigation systems for mobile robots. The paper presenst the state of the art in map building and localization for mobile robots navigating within unknown environment, and then introduces a solution for the complex problem of autonomous map building and maintenance method with focus on developing an incremental grid based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensory information while wandering in it and stay away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in parallel and distributed framework. Simulation based experiments has been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.",
"title": ""
},
{
"docid": "b07f858d08f40f61f3ed418674948f12",
"text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure met programming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.",
"title": ""
},
{
"docid": "b1810c928902c96784b922c304079641",
"text": "The rapid proliferation of wireless networks and mobile computing applications has changed the landscape of network security. The traditional way of protecting networks with firewalls and encryption software is no longer sufficient and effective. We need to search for new architecture and mechanisms to protect the wireless networks and mobile computing application. In this paper, we examine the vulnerabilities of wireless networks and argue that we must include intrusion detection in the security architecture for mobile computing environment. We have developed such an architecture and evaluated a key mechanism in this architecture, anomaly detection for mobile ad-hoc network, through simulation experiments.",
"title": ""
},
{
"docid": "7b2d1af8db446019ba45511098dddefe",
"text": "This article proposes a novel online portfolio selection strategy named “Passive Aggressive Mean Reversion” (PAMR). Unlike traditional trend following approaches, the proposed approach relies upon the mean reversion relation of financial markets. Equipped with online passive aggressive learning technique from machine learning, the proposed portfolio selection strategy can effectively exploit the mean reversion property of markets. By analyzing PAMR’s update scheme, we find that it nicely trades off between portfolio return and volatility risk and reflects the mean reversion trading principle. We also present several variants of PAMR algorithm, including a mixture algorithm which mixes PAMR and other strategies. We conduct extensive numerical experiments to evaluate the empirical performance of the proposed algorithms on various real datasets. The encouraging results show that in most cases the proposed PAMR strategy outperforms all benchmarks and almost all state-of-the-art portfolio selection strategies under various performance metrics. In addition to its superior performance, the proposed PAMR runs extremely fast and thus is very suitable for real-life online trading applications. The experimental testbed including source codes and data sets is available at http://www.cais.ntu.edu.sg/~chhoi/PAMR/ .",
"title": ""
},
{
"docid": "7f52960fb76c3c697ef66ffee91b13ee",
"text": "The aim of this work was to explore the feasibility of combining hot melt extrusion (HME) with 3D printing (3DP) technology, with a view to producing different shaped tablets which would be otherwise difficult to produce using traditional methods. A filament extruder was used to obtain approx. 4% paracetamol loaded filaments of polyvinyl alcohol with characteristics suitable for use in fused-deposition modelling 3DP. Five different tablet geometries were successfully 3D-printed-cube, pyramid, cylinder, sphere and torus. The printing process did not affect the stability of the drug. Drug release from the tablets was not dependent on the surface area but instead on surface area to volume ratio, indicating the influence that geometrical shape has on drug release. An erosion-mediated process controlled drug release. This work has demonstrated the potential of 3DP to manufacture tablet shapes of different geometries, many of which would be challenging to manufacture by powder compaction.",
"title": ""
},
{
"docid": "17cb27030abc5054b8f51256bdee346a",
"text": "Purpose – This paper seeks to define and describe agile project management using the Scrum methodology as a method for more effectively managing and completing projects. Design/methodology/approach – This paper provides a general overview and introduction to the concepts of agile project management and the Scrum methodology in particular. Findings – Agile project management using the Scrum methodology allows project teams to manage digital library projects more effectively by decreasing the amount of overhead dedicated to managing the project. Using an iterative process of continuous review and short-design time frames, the project team is better able to quickly adapt projects to rapidly evolving environments in which systems will be used. Originality/value – This paper fills a gap in the digital library project management literature by providing an overview of agile project management methods.",
"title": ""
},
{
"docid": "9b9425132e89d271ed6baa0dbc16b941",
"text": "Although personalized recommendation has been investigated for decades, the wide adoption of Latent Factor Models (LFM) has made the explainability of recommendations a critical issue to both the research community and practical application of recommender systems. For example, in many practical systems the algorithm just provides a personalized item recommendation list to the users, without persuasive personalized explanation about why such an item is recommended while another is not. Unexplainable recommendations introduce negative effects to the trustworthiness of recommender systems, and thus affect the effectiveness of recommendation engines. In this work, we investigate explainable recommendation in aspects of data explainability, model explainability, and result explainability, and the main contributions are as follows: 1. Data Explainability: We propose Localized Matrix Factorization (LMF) framework based Bordered Block Diagonal Form (BBDF) matrices, and further applied this technique for parallelized matrix factorization. 2. Model Explainability: We propose Explicit Factor Models (EFM) based on phrase-level sentiment analysis, as well as dynamic user preference modeling based on time series analysis. In this work, we extract product features and user opinions towards different features from large-scale user textual reviews based on phrase-level sentiment analysis techniques, and introduce the EFM approach for explainable model learning and recommendation. 3. Economic Explainability: We propose the Total Surplus Maximization (TSM) framework for personalized recommendation, as well as the model specification in different types of online applications. Based on basic economic concepts, we provide the definitions of utility, cost, and surplus in the application scenario of Web services, and propose the general framework of web total surplus calculation and maximization.",
"title": ""
},
{
"docid": "7b2bf230751b29044ecf36efc3961bf5",
"text": "A double inverted pendulum plant has been in the domain of control researchers as an established model for studies on stability. The stability of such as a system taking the linearized plant dynamics has yielded satisfactory results by many researchers using classical control techniques. The established model that is analyzed as part of this work was tested under the influence of time delay, where the controller was fine tuned using a BAT algorithm taking into considering the fitness function of square of error. This proposed method gave results which were better when compared without time delay wherein the calculated values indicated the issues when incorporating time delay.",
"title": ""
},
{
"docid": "ce31be5bfeb05a30c5479a3192d20f93",
"text": "Network embedding represents nodes in a continuous vector space and preserves structure information from the Network. Existing methods usually adopt a “one-size-fits-all” approach when concerning multi-scale structure information, such as firstand second-order proximity of nodes, ignoring the fact that different scales play different roles in the embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding(AAANE) framework, which promotes the collaboration of different scales and lets them vote for robust representations. The proposed AAANE consists of two components: 1) Attention-based autoencoder effectively capture the highly non-linear network structure, which can de-emphasize irrelevant scales during training. 2) An adversarial regularization guides the autoencoder learn robust representations by matching the posterior distribution of the latent embeddings to given prior distribution. This is the first attempt to introduce attention mechanisms to multi-scale network embedding. Experimental results on realworld networks show that our learned attention parameters are different for every network and the proposed approach outperforms existing state-ofthe-art approaches for network embedding.",
"title": ""
}
] |
scidocsrr
|
90cd8c386fa424bedca4491052232790
|
A simple probabilistic deep generative model for learning generalizable disentangled representations from grouped data
|
[
{
"docid": "98d3dddfca32c442f6b7c0a6da57e690",
"text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.",
"title": ""
},
{
"docid": "ebee9e3ab7fe1a0eb5da28793874e309",
"text": "We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentaglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities.",
"title": ""
},
{
"docid": "43f9e6edee92ddd0b9dfff885b69f64d",
"text": "In this paper, we present a scalable and exact solution for probabilistic linear discriminant analysis (PLDA). PLDA is a probabilistic model that has been shown to provide state-of-the-art performance for both face and speaker recognition. However, it has one major drawback: At training time estimating the latent variables requires the inversion and storage of a matrix whose size grows quadratically with the number of samples for the identity (class). To date, two approaches have been taken to deal with this problem, to 1) use an exact solution that calculates this large matrix and is obviously not scalable with the number of samples or 2) derive a variational approximation to the problem. We present a scalable derivation which is theoretically equivalent to the previous nonscalable solution and thus obviates the need for a variational approximation. Experimentally, we demonstrate the efficacy of our approach in two ways. First, on labeled faces in the wild, we illustrate the equivalence of our scalable implementation with previously published work. Second, on the large Multi-PIE database, we illustrate the gain in performance when using more training samples per identity (class), which is made possible by the proposed scalable formulation of PLDA.",
"title": ""
}
] |
[
{
"docid": "13d9b338b83a5fcf75f74607bf7428a7",
"text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.",
"title": ""
},
{
"docid": "4fb391446ca62dc2aa52ce905d92b036",
"text": "The frequency and intensity of natural disasters has increased significantly in recent decades, and this trend is expected to continue. Hence, understanding and predicting human evacuation behavior and mobility will play a vital role in planning effective humanitarian relief, disaster management, and long-term societal reconstruction. However, existing models are shallow models, and it is difficult to apply them for understanding the “deep knowledge” of human mobility. Therefore, in this study, we collect big and heterogeneous data (e.g., GPS records of 1.6 million users over 3 years, data on earthquakes that have occurred in Japan over 4 years, news report data, and transportation network data), and we build an intelligent system, namely, DeepMob, for understanding and predicting human evacuation behavior and mobility following different types of natural disasters. The key component of DeepMob is based on a deep learning architecture that aims to understand the basic laws that govern human behavior and mobility following natural disasters, from big and heterogeneous data. Furthermore, based on the deep learning model, DeepMob can accurately predict or simulate a person’s future evacuation behaviors or evacuation routes under different disaster conditions. Experimental results and validations demonstrate the efficiency and superior performance of our system, and suggest that human mobility following disasters may be predicted and simulated more easily than previously thought.",
"title": ""
},
{
"docid": "60cbe9d8e1cbc5dd87c8f438cc766a0b",
"text": "Drosophila mounts a potent host defence when challenged by various microorganisms. Analysis of this defence by molecular genetics has now provided a global picture of the mechanisms by which this insect senses infection, discriminates between various classes of microorganisms and induces the production of effector molecules, among which antimicrobial peptides are prominent. An unexpected result of these studies was the discovery that most of the genes involved in the Drosophila host defence are homologous or very similar to genes implicated in mammalian innate immune defences. Recent progress in research on Drosophila immune defence provides evidence for similarities and differences between Drosophila immune responses and mammalian innate immunity.",
"title": ""
},
{
"docid": "d98fce90097705f466382e8bcb0a39b1",
"text": "This paper presents a novel vehicular adaptive cruise control (ACC) system that can comprehensively address issues of tracking capability, fuel economy and driver desired response. A hierarchical control architecture is utilized in which a lower controller compensates for nonlinear vehicle dynamics and enables tracking of desired acceleration. The upper controller is synthesized under the framework of model predictive control (MPC) theory. A quadratic cost function is developed that considers the contradictions between minimal tracking error, low fuel consumption and accordance with driver dynamic car-following characteristics while driver longitudinal ride comfort, driver permissible tracking range and rear-end safety are formulated as linear constraints. Employing a constraint softening method to avoid computing infeasibility, an optimal control law is numerically calculated using a quadratic programming algorithm. Detailed simulations with a heavy duty truck show that the developed ACC system provides significant benefits in terms of fuel economy and tracking capability while at the same time also satisfying driver desired car following characteristics.",
"title": ""
},
{
"docid": "07e9b961a1196665538d89b60a30a7d1",
"text": "The problem of anomaly detection in time series has received a lot of attention in the past two decades. However, existing techniques cannot locate where the anomalies are within anomalous time series, or they require users to provide the length of potential anomalies. To address these limitations, we propose a self-learning online anomaly detection algorithm that automatically identifies anomalous time series, as well as the exact locations where the anomalies occur in the detected time series. In addition, for multivariate time series, it is difficult to detect anomalies due to the following challenges. First, anomalies may occur in only a subset of dimensions (variables). Second, the locations and lengths of anomalous subsequences may be different in different dimensions. Third, some anomalies may look normal in each individual dimension but different with combinations of dimensions. To mitigate these problems, we introduce a multivariate anomaly detection algorithm which detects anomalies and identifies the dimensions and locations of the anomalous subsequences. We evaluate our approaches on several real-world datasets, including two CPU manufacturing data from Intel. We demonstrate that our approach can successfully detect the correct anomalies without requiring any prior knowledge about the data.",
"title": ""
},
{
"docid": "bec66d4d576f2c5c5643ffe4b72ab353",
"text": "Many cities suffer from noise pollution, which compromises people's working efficiency and even mental health. New York City (NYC) has opened a platform, entitled 311, to allow people to complain about the city's issues by using a mobile app or making a phone call; noise is the third largest category of complaints in the 311 data. As each complaint about noises is associated with a location, a time stamp, and a fine-grained noise category, such as \"Loud Music\" or \"Construction\", the data is actually a result of \"human as a sensor\" and \"crowd sensing\", containing rich human intelligence that can help diagnose urban noises. In this paper we infer the fine-grained noise situation (consisting of a noise pollution indicator and the composition of noises) of different times of day for each region of NYC, by using the 311 complaint data together with social media, road network data, and Points of Interests (POIs). We model the noise situation of NYC with a three dimension tensor, where the three dimensions stand for regions, noise categories, and time slots, respectively. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we recover the noise situation throughout NYC. The information can inform people and officials' decision making. We evaluate our method with four real datasets, verifying the advantages of our method beyond four baselines, such as the interpolation-based approach.",
"title": ""
},
{
"docid": "5d40cae84395cc94d68bd4352383d66b",
"text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we investigate a survey for SHVC extension. We describe also its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy compared to simulcast configuration and HEVC for enhancement layer using HM16 for several test sequences and coding conditions.",
"title": ""
},
{
"docid": "466b1889684abb52f2d83d45fbabc4bb",
"text": "In this study, we focused on developing a novel 3D Thinning algorithm to extract one-voxel wide skeleton from various 3D objects aiming at preserving the topological information. The 3D Thinning algorithm was testified on computer-generated and real 3D reconstructed image sets acquired from TEMT and compared with other existing 3D Thinning algorithms. It is found that the algorithm has conserved medial axes and simultaneously topologies very well, demonstrating many advantages over the existing technologies. They are versatile, rigorous, efficient and rotation invariant.",
"title": ""
},
{
"docid": "b753eb752d4f87dbff82d77e8417f389",
"text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:",
"title": ""
},
{
"docid": "2cbd47c2e7a1f68bd84d18413db26ea3",
"text": "Horizontal gene transfer (HGT) refers to the acquisition of foreign genes by organisms. The occurrence of HGT among bacteria in the environment is assumed to have implications in the risk assessment of genetically modified bacteria which are released into the environment. First, introduced genetic sequences from a genetically modified bacterium could be transferred to indigenous micro-organisms and alter their genome and subsequently their ecological niche. Second, the genetically modified bacterium released into the environment might capture mobile genetic elements (MGE) from indigenous micro-organisms which could extend its ecological potential. Thus, for a risk assessment it is important to understand the extent of HGT and genome plasticity of bacteria in the environment. This review summarizes the present state of knowledge on HGT between bacteria as a crucial mechanism contributing to bacterial adaptability and diversity. In view of the use of GM crops and microbes in agricultural settings, in this mini-review we focus particularly on the presence and role of MGE in soil and plant-associated bacteria and the factors affecting gene transfer.",
"title": ""
},
{
"docid": "db9ab90f56a5762ebf6729ffc802a02a",
"text": "In this paper we present a novel approach to music analysis, in which a grammar is automatically generated explaining a musical work’s structure. The proposed method is predicated on the hypothesis that the shortest possible grammar provides a model of the musical structure which is a good representation of the composer’s intent. The effectiveness of our approach is demonstrated by comparison of the results with previously-published expert analysis; our automated approach produces results comparable to human annotation. We also illustrate the power of our approach by showing that it is able to locate errors in scores, such as introduced by OMR or human transcription. Further, our approach provides a novel mechanism for intuitive high-level editing and creative transformation of music. A wide range of other possible applications exists, including automatic summarization and simplification; estimation of musical complexity and similarity, and plagiarism detection.",
"title": ""
},
{
"docid": "914c985dc02edd09f0ee27b75ecee6a4",
"text": "Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP studies in 5- to 16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual components, the P1 and the occipito-temporal N170. To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between the ages of 4 and 17, and a group of adults. We found that none of the previously reported age-dependent changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity: it had the same topography and right hemisphere dominance, it was absent for meaningless (scrambled) stimuli, and larger and earlier for faces than cars. The data also illustrate the large amount of inter-individual and inter-trial variance in young children's data, which causes the N170 to merge with a later component, the N250, in grand-averaged data. Based on our observations, we suggest that the previously reported \"bi-fid\" N170 of young children is in fact the N250. Overall, our data indicate that the electrophysiological markers of face-sensitive perceptual processes are present from 4 years of age and do not appear to change throughout development.",
"title": ""
},
{
"docid": "3cd32b304b7e5b4bc102a5e38ae1f488",
"text": "With the growing emphasis on reuse software development process moves toward component based software design As a result there is a need for modeling ap proaches that are capable of considering the architecture of the software and es timating the reliability by taking into account the interactions between the com ponents the utilization of the components and the reliabilities of the components and of their interfaces with other components This paper details the state of the architecture based approach to reliability assessment of component based software and describes how it can be used to examine software behavior right from the de sign stage to implementation and nal deployment First the common requirements of the architecture based models are identi ed and the classi cation is proposed Then the key models in each class are described in detail and the relation among them is discussed A critical analysis of underlying assumptions limitations and applicability of these models is provided which should be helpful in determining the directions for future research",
"title": ""
},
{
"docid": "2aeaffcd6af02f0c61f4cf998a3e630c",
"text": "This paper reports on experiments to improve the Optical Character Recognition (ocr) quality of historical text as a preliminary step in text mining. We analyse the quality of ocred text compared to a gold standard and show how it can be improved by performing two automatic correction steps. We also demonstrate the impact this can have on named entity recognition in a preliminary extrinsic evaluation. This work was performed as part of the Trading Consequences project which is focussed on text mining of historical documents for the study of nineteenth century trade in the British Empire.",
"title": ""
},
{
"docid": "19b16abf5ec7efe971008291f38de4d4",
"text": "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the ℓ2-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data,which preserves the inter-modality and intra-modality similarity relationships.An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches.",
"title": ""
},
{
"docid": "aa03d917910a3da1f22ceea8f5b8d1c8",
"text": "We train a language-universal dependency parser on a multilingual collection of treebanks. The parsing model uses multilingual word embeddings alongside learned and specified typological information, enabling generalization based on linguistic universals and based on typological similarities. We evaluate our parser’s performance on languages in the training set as well as on the unsupervised scenario where the target language has no trees in the training data, and find that multilingual training outperforms standard supervised training on a single language, and that generalization to unseen languages is competitive with existing model-transfer approaches.",
"title": ""
},
{
"docid": "27128d582432a2d76df88bab16f9f835",
"text": "During the last twenty years genetic algorithms [6] and other evolutionary algorithms [11] have been applied to many hard problems with very good results. However, for many constrained problems the results were mixed. It seems, that (in general) there has not been any single accepted strategy to deal with constrained problems: most researchers used some ad-hoc methods for handling problem specific constraints. The reason for this phenomena might be that there is an experimental evidence [10] that incorporation of the problem specific knowledge (i.e., the problem's constraints) into the evolutionary algorithm (i.e., into its chromosomal structures and genetic operators) enhances its performance in a significant way. The constraint-handling techniques for evolutionary algorithms can be grouped into a few categories. One way of dealing with candidates that violate the constraints is to generate potential solutions without considering the constraints and then to penalize them by decreasing the \"goodness\" of the evaluation function. In other words, a constrained problem is transformed to an unconstrained one by associating a penalty with all constraint violations; these penalties are included in the function evaluation. Of course, there are a variety of possible penalty functions which can be applied. Some penalty functions assign a constant as a penalty measure. Other penalty functions depend on the degree of violation: the larger violation is, the greater penalty is imposed (however, the growth of the function can be logarithmic, linear, quadratic, exponential, etc. with respect to the size of the violation). Each of these categories of penalty functions has its own disadvantages; in [4] Davis wrote:",
"title": ""
},
{
"docid": "ff2894c10a19212668ce4e6b2750b22d",
"text": "Three-phase voltage source converters (VSCs) are commonly used as power flow interface in ac/dc hybrid power systems. The ac power grid suffers from unpredictable short-circuit faults and power flow fluctuations, causing undesirable grid voltage dips. The voltage dips may last for a short time or a long duration, and vary the working conditions of VSCs. Due to their nonlinear characteristics, VSCs may enter abnormal operating mode in response to voltage dips. In this paper, the transient response of three-phase VSCs under practical grid voltage dips is studied and a catastrophic bifurcation phenomenon is identified in the system. The converter will exhibit an irreversible instability after the dips. The expanded magnitude of ac reactive current may cause catastrophic consequence for the system. A full-order eigenvalue analysis and a reduced-order mixed-potential-theory-based analysis are adopted to reveal the physical origin of the large-signal instability phenomenon. The key parameters of the system are identified and the boundaries of instability are located. The bifurcation phenomenon and a set of design-oriented stability boundaries in some chosen parameter space are verified by cycle-by-cycle simulations and experimental measurement on a practical grid-connected VSC prototype.",
"title": ""
},
{
"docid": "d698d49a82829a2bb772d1c3f6c2efc5",
"text": "The concepts of Data Warehouse, Cloud Computing and Big Data have been proposed during the era of data flood. By reviewing current progresses in data warehouse studies, this paper introduces a framework to achieve better visualization for Big Data. This framework can reduce the cost of building Big Data warehouses by divide data into sub dataset and visualize them respectively. Meanwhile, basing on the powerful visualization tool of D3.js and directed by the principle of Whole-Parts, current data can be presented to users from different dimensions by different rich statistics graphics.",
"title": ""
},
{
"docid": "dd05688335b4240bbc40919870e30f39",
"text": "In this tool report, we present an overview of the Watson system, a Semantic Web search engine providing various functionalities not only to find and locate ontologies and semantic data online, but also to explore the content of these semantic documents. Beyond the simple facade of a search engine for the Semantic Web, we show that the availability of such a component brings new possibilities in terms of developing semantic applications that exploit the content of the Semantic Web. Indeed, Watson provides a set of APIs containing high level functions for finding, exploring and querying semantic data and ontologies that have been published online. Thanks to these APIs, new applications have emerged that connect activities such as ontology construction, matching, sense disambiguation and question answering to the Semantic Web, developed by our group and others. In addition, we also describe Watson as a unprecedented research platform for the study the Semantic Web, and of formalised knowledge in general.",
"title": ""
}
] |
scidocsrr
|
6d3ee2b196185ba6e9b63886f85de141
|
Enhancing First Story Detection using Word Embeddings
|
[
{
"docid": "a85c13406ddc3dc057f029ba96fdffe1",
"text": "We apply statistical machine translation (SMT) tools to generate novel paraphrases of input sentences in the same language. The system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the World Wide Web. Alignment Error Rate (AER) is measured to gauge the quality of the resulting corpus. A monotone phrasal decoder generates contextual replacements. Human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.",
"title": ""
}
] |
[
{
"docid": "3b9ab1832864eda1a67fc46d425de468",
"text": "Wind-photovoltaic hybrid system (WPHS) utilization is becoming popular due to increasing energy costs and decreasing prices of turbines and photovoltaic (PV) panels. However, prior to construction of a renewable generation station, it is necessary to determine the optimum number of PV panels and wind turbines for minimal cost during continuity of generated energy to meet the desired consumption. In fact, the traditional sizing procedures find optimum number of the PV modules and wind turbines subject to minimum cost. However, the optimum battery capacity is either not taken into account, or it is found by a full search between all probable solution spaces which requires extensive computation. In this study, a novel description of the production/consumption phenomenon is proposed, and a new sizing procedure is developed. Using this procedure, optimum battery capacity, together with optimum number of PV modules and wind turbines subject to minimum cost can be obtained with good accuracy. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3eb3ae4ac8236851b1399629b9577085",
"text": "We study the problem of troubleshooting machine learning systems that rely on analytical pipelines of distinct components. Understanding and fixing errors that arise in such integrative systems is difficult as failures can occur at multiple points in the execution workflow. Moreover, errors can propagate, become amplified or be suppressed, making blame assignment difficult. We propose a human-in-the-loop methodology which leverages human intellect for troubleshooting system failures. The approach simulates potential component fixes through human computation tasks and measures the expected improvements in the holistic behavior of the system. The method provides guidance to designers about how they can best improve the system. We demonstrate the effectiveness of the approach on an automated image captioning system that has been pressed into real-world use.",
"title": ""
},
{
"docid": "867ddbd84e8544a5c2d6f747756ca3d9",
"text": "We report a 166 W burst mode pulse fiber amplifier seeded by a Q-switched mode-locked all-fiber laser at 1064 nm based on a fiber-coupled semiconductor saturable absorber mirror. With a pump power of 230 W at 976 nm, the output corresponds to a power conversion efficiency of 74%. The repetition rate of the burst pulse is 20 kHz, the burst energy is 8.3 mJ, and the burst duration is ∼ 20 μs, which including about 800 mode-locked pulses at a repetition rate of 40 MHz and the width of the individual mode-locked pulse is measured to be 112 ps at the maximum output power. To avoid optical damage to the fiber, the initial mode-locked pulses were stretched to 72 ps by a bandwidth-limited fiber bragg grating. After a two-stage preamplifier, the pulse width was further stretched to 112 ps, which is a result of self-phase modulation of the pulse burst during the amplification.",
"title": ""
},
{
"docid": "d79a4bae5e7464d2f2acc51f1f22ccbe",
"text": "The inductance calculation and the layout optimization for spiral inductors still are research topics of actuality and very interesting especially in radio frequency integrated circuits. Our research work is fixed on the vast topics of this research area. In this effect we create a software program dedicate to dc inductance calculation and to layout optimization for spiral inductors. We use a wide range of inductors in the application made with our program; we compare our applications results with measurements results existing in the literature and with three-dimensional commercial field solver results in order to validate it. Our program is accurate enough, has a very friendly interface, is very easy to use and its running time is very sort compared with other similar programs. Since spiral inductors tolerance is generally on the order of several percent, a more accurate program is not needed in practice. The program is very useful for the spiral inductor design because it calculates the inductance of spiral inductors with a very good accuracy and also for the spiral inductor optimization, because it optimize the spiral inductor layouts in terms of technological restrictions and/or in terms of the designers' needs.",
"title": ""
},
{
"docid": "67d141b8e53e1398b6988e211d16719e",
"text": "the recent advancement of networking technology has enabled the streaming of video content over wired/wireless network to a great extent. Video streaming includes various types of video content, namely, IP television (IPTV), Video on demand (VOD), Peer-to-Peer (P2P) video sharing, Voice (and video) over IP (VoIP) etc. The consumption of the video contents has been increasing a lot these days and promises a huge potential for the network provider, content provider and device manufacturers. However, from the end user's perspective there is no universally accepted existing standard metric, which will ensure the quality of the application/utility to meet the user's desired experience. In order to fulfill this gap, a new metric, called Quality of Experience (QoE), has been proposed in numerous researches recently. Our aim in this paper is to research the evolution of the term QoE, find the influencing factors of QoE metric especially in video streaming and finally QoE modelling and methodologies in practice.",
"title": ""
},
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "92c91a8e9e5eec86f36d790dec8020e7",
"text": "Aspect-based opinion mining, which aims to extract aspects and their corresponding ratings from customers reviews, provides very useful information for customers to make purchase decisions. In the past few years several probabilistic graphical models have been proposed to address this problem, most of them based on Latent Dirichlet Allocation (LDA). While these models have a lot in common, there are some characteristics that distinguish them from each other. These fundamental differences correspond to major decisions that have been made in the design of the LDA models. While research papers typically claim that a new model outperforms the existing ones, there is normally no \"one-size-fits-all\" model. In this paper, we present a set of design guidelines for aspect-based opinion mining by discussing a series of increasingly sophisticated LDA models. We argue that these models represent the essence of the major published methods and allow us to distinguish the impact of various design decisions. We conduct extensive experiments on a very large real life dataset from Epinions.com (500K reviews) and compare the performance of different models in terms of the likelihood of the held-out test set and in terms of the accuracy of aspect identification and rating prediction.",
"title": ""
},
{
"docid": "0bd7c453279c97333e7ac6c52f7127d8",
"text": "Among various biometric modalities, signature verification remains one of the most widely used methods to authenticate the identity of an individual. Signature verification, the most important component of behavioral biometrics, has attracted significant research attention over the last three decades. Despite extensive research, the problem still remains open to research due to the variety of challenges it offers. The high intra-class variations in signatures resulting from different physical or mental states of the signer, the differences that appear with aging and the visual similarity in case of skilled forgeries etc. are only a few of the challenges to name. This paper is intended to provide a review of the recent advancements in offline signature verification with a discussion on different types of forgeries, the features that have been investigated for this problem and the classifiers employed. The pros and cons of notable recent contributions to this problem have also been presented along with a discussion of potential future research directions on this subject.",
"title": ""
},
{
"docid": "3867ff9ac24349b17e50ec2a34e84da4",
"text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.",
"title": ""
},
{
"docid": "e875d4a88e73984e37f5ce9ffe543791",
"text": "A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented. The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.",
"title": ""
},
{
"docid": "be69820b8b0f80c9bb9c56d4652645da",
"text": "Intel Software Guard Extensions (SGX) is an emerging trusted hardware technology. SGX enables user-level code to allocate regions of trusted memory, called enclaves, where the confidentiality and integrity of code and data are guaranteed. While SGX offers strong security for applications, one limitation of SGX is the lack of system call support inside enclaves, which leads to a non-trivial, refactoring effort when protecting existing applications with SGX. To address this issue, previous works have ported existing library OSes to SGX. However, these library OSes are suboptimal in terms of security and performance since they are designed without taking into account the characteristics of SGX.\n In this paper, we revisit the library OS approach in a new setting---Intel SGX. We first quantitatively evaluate the performance impact of enclave transitions on SGX programs, identifying it as a performance bottleneck for any library OSes that aim to support system-intensive SGX applications. We then present the design and implementation of SGXKernel, an in-enclave library OS, with highlight on its switchless design, which obviates the needs for enclave transitions. This switchless design is achieved by incorporating two novel ideas: asynchronous cross-enclave communication and preemptible in-enclave multi-threading. We intensively evaluate the performance of SGXKernel on microbenchmarks and application benchmarks. The results show that SGXKernel significantly outperforms a state-of-the-art library OS that has been ported to SGX.",
"title": ""
},
{
"docid": "87eb69d6404bf42612806a5e6d67e7bb",
"text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.",
"title": ""
},
{
"docid": "1e347f69d739577d4bb0cc050d87eb5b",
"text": "The rapidly growing paradigm of the Internet of Things (IoT) requires new search engines, which can crawl heterogeneous data sources and search in highly dynamic contexts. Existing search engines cannot meet these requirements as they are designed for traditional Web and human users only. This is contrary to the fact that things are emerging as major producers and consumers of information. Currently, there is very little work on searching IoT and a number of works claim the unavailability of public IoT data. However, it is dismissed that a majority of real-time web-based maps are sharing data that is generated by things, directly. To shed light on this line of research, in this paper, we firstly create a set of tools to capture IoT data from a set of given data sources. We then create two types of interfaces to provide real-time searching services on dynamic IoT data for both human and machine users.",
"title": ""
},
{
"docid": "ec48c81a61954e9c6f262c508b3cdaa7",
"text": "Why should policymakers and practitioners care about the scholarly study of international affairs? Those who conduct foreign policy often dismiss academic theorists (frequently, one must admit, with good reason), but there is an inescapable link between the abstract world of theory and the real world of policy. We need theories to make sense of the blizzard of information that bombards us daily. Even policymakers who are contemptuous of \"theory\" must rely on their own (often unstated) ideas about how the world works in order to decide what to do. It is hard to make good policy if one's basic organizing principles are flawed, just as it is hard to construct good theories without knowing a lot about the real world. Everyone uses theories-whether he or she knows it or not-and disagreements about policy usually rest on more fundamental disagreements about the basic forces that shape international outcomes.",
"title": ""
},
{
"docid": "f01233a2f3ad749704649ead44e60cba",
"text": "The species of the pseudophyllidean genus Bothriocephalus Rudolphi, 1808 parasitising freshwater fishes in America are revised, based on the examination of type and voucher specimens of seven taxa. There are five valid species: Bothriocephalus claviceps (Goeze, 1782), B. cuspidatus Cooper, 1917, B. formosus Mueller & Van Cleave, 1932, B. acheilognathi Yamaguti, 1934, and B. pearsei Scholz, Vargas-Vázquez & Moravec, 1996. B. texomensis Self, 1954 from Hiodon alosoides in the USA, and B. musculosus Baer, 1937 from a cichlid Cichlasoma biocellatum (= C. octofasciatum) which died in an aquarium in Switzerland, are synonymised with B. cuspidatus. B. schilbeodis Cheng & James, 1960 from Schilbeodes insignis in the USA, B. speciosus (Leidy, 1858) Leidy, 1872 from Boleostoma olmstedi in the USA, and B. cestus Leidy, 1885 from Salvelinus sp. in Canada are considered to be species inquirendae until new material for the evaluation of their taxonomic status is available. B. cordiceps (Leidy, 1872) from Salmo (= Salvelinus) fontinalis in North America is in fact a larva (plerocercoid) of a Diphyllobothrium species. The study showed that there have been many misidentifications, mostly of B. cuspidatus erroneously designated as B. formosus or B. claviceps. The five valid species are redescribed and illustrated, with emphasis on scolex morphology. The distribution of individual taxa and the spectrum of their definitive hosts are briefly reviewed and a key facilitating identification of individual species is also provided.",
"title": ""
},
{
"docid": "1c83ce2568af5cc3679b69282b25c35d",
"text": "A useful ability for search engines is to be able to rank objects with novelty and diversity: the top k documents retrieved should cover possible intents of a query with some distribution, or should contain a diverse set of subtopics related to the user’s information need, or contain nuggets of information with little redundancy. Evaluation measures have been introduced to measure the effectiveness of systems at this task, but these measures have worst-case NP-hard computation time. The primary consequence of this is that there is no ranking principle akin to the Probability Ranking Principle for document relevance that provides uniform instruction on how to rank documents for novelty and diversity. We use simulation to investigate the practical implications of this for optimization and evaluation of retrieval systems.",
"title": ""
},
{
"docid": "30b508c7b576c88705098ac18657664b",
"text": "The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.",
"title": ""
},
{
"docid": "60abc52c4953a01d7964b63dde2d8935",
"text": "This article proposes a security authentication process that is well-suited for Vehicular Ad-hoc Networks (VANET). As compared to current Public Key Infrastructure (PKI) proposals for VANET authentication, the scheme is significantly more efficient with regard to bandwidth and computation. The scheme uses time as the creator of asymmetric knowledge. A sender creates a long chain of keys. Each key is used for only a short period of time to sign messages. When a key expires, it is publicly revealed, and then never again used. (The sender subsequently uses the next key in its chain to sign future messages.) Upon receiving a revealed key, recipients authenticate previously received messages. The root of a sender’s keychain is given in a certificate signed by an authority. This article describes several possible certificate exchange methods. It also addresses privacy issues in VANET, specifically the tension between anonymity and the ability to revoke certificates.",
"title": ""
},
{
"docid": "df97dff1e2539f192478f2aa91f69cc4",
"text": "Computer systems are increasingly employed in circumstances where their failure (or even their correct operation, if they are built to flawed requirements) can have serious consequences. There is a surprising diversity of opinion concerning the properties that such “critical systems” should possess, and the best methods to develop them. The dependability approach grew out of the tradition of ultra-reliable and fault-tolerant systems, while the safety approach grew out of the tradition of hazard analysis and system safety engineering. Yet another tradition is found in the security community, and there are further specialized approaches in the tradition of real-time systems. In this report, I examine the critical properties considered in each approach, and the techniques that have been developed to specify them and to ensure their satisfaction. Since systems are now being constructed that must satisfy several of these critical system properties simultaneously, there is particular interest in the extent to which techniques from one tradition support or conflict with those of another, and in whether certain critical system properties are fundamentally compatible or incompatible with each other. As a step toward improved understanding of these issues, I suggest a taxonomy, based on Perrow’s analysis, that considers the complexity of component interactions and tightness of coupling as primary factors. C. Perrow. Normal Accidents: Living with High Risk Technologies. Basic Books, New York, NY, 1984.",
"title": ""
},
{
"docid": "ddae1c6469769c2c7e683bfbc223ad1a",
"text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments1 show2 that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"title": ""
}
] |
scidocsrr
|
fbfb321cc8756a45e38e96ce21c59cc1
|
Very Deep Convolutional Neural Networks for Noise Robust Speech Recognition
|
[
{
"docid": "75e46bf5c1bcf73a9918026b0a4ad4f0",
"text": "Recently, the hybrid deep neural network (DNN)- hidden Markov model (HMM) has been shown to significantly improve speech recognition performance over the conventional Gaussian mixture model (GMM)-HMM. The performance improvement is partially attributed to the ability of the DNN to model complex correlations in speech features. In this paper, we show that further error rate reduction can be obtained by using convolutional neural networks (CNNs). We first present a concise description of the basic CNN and explain how it can be used for speech recognition. We further propose a limited-weight-sharing scheme that can better model speech features. The special structure such as local connectivity, weight sharing, and pooling in CNNs exhibits some degree of invariance to small shifts of speech features along the frequency axis, which is important to deal with speaker and environment variations. Experimental results show that CNNs reduce the error rate by 6%-10% compared with DNNs on the TIMIT phone recognition and the voice search large vocabulary speech recognition tasks.",
"title": ""
},
{
"docid": "2b095980aaccd7d35d079260738279c5",
"text": "Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance when embedded in large vocabulary continuous speech recognition (LVCSR) systems due to its capability of modeling local correlations and reducing translational variations. In all previous related works for ASR, only up to two convolutional layers are employed. In light of the recent success of very deep CNNs in image classification, it is of interest to investigate the deep structure of CNNs for speech recognition in detail. In contrast to image classification, the dimensionality of the speech feature, the span size of input feature and the relationship between temporal and spectral domain are new factors to consider while designing very deep CNNs. In this work, very deep CNNs are introduced for LVCSR task, by extending depth of convolutional layers up to ten. The contribution of this work is two-fold: performance improvement of very deep CNNs is investigated under different configurations; further, a better way to perform convolution operations on temporal dimension is proposed. Experiments showed that very deep CNNs offer a 8-12% relative improvement over baseline DNN system, and a 4-7% relative improvement over baseline CNN system, evaluated on both a 15-hr Callhome and a 51-hr Switchboard LVCSR tasks.",
"title": ""
},
{
"docid": "6cb246cadd7df12543d23a83d42d87a1",
"text": "New waves of consumer-centric applications, such as voice search and voice interaction with mobile devices and home entertainment systems, increasingly require automatic speech recognition (ASR) to be robust to the full range of real-world noise and other acoustic distorting conditions. Despite its practical importance, however, the inherent links between and distinctions among the myriad of methods for noise-robust ASR have yet to be carefully studied in order to advance the field further. To this end, it is critical to establish a solid, consistent, and common mathematical foundation for noise-robust ASR, which is lacking at present. This article is intended to fill this gap and to provide a thorough overview of modern noise-robust techniques for ASR developed over the past 30 years. We emphasize methods that are proven to be successful and that are likely to sustain or expand their future applicability. We distill key insights from our comprehensive overview in this field and take a fresh look at a few old problems, which nevertheless are still highly relevant today. Specifically, we have analyzed and categorized a wide range of noise-robust techniques using five different criteria: 1) feature-domain vs. model-domain processing, 2) the use of prior knowledge about the acoustic environment distortion, 3) the use of explicit environment-distortion models, 4) deterministic vs. uncertainty processing, and 5) the use of acoustic models trained jointly with the same feature enhancement or model adaptation process used in the testing stage. With this taxonomy-oriented review, we equip the reader with the insight to choose among techniques and with the awareness of the performance-complexity tradeoffs. The pros and cons of using different noise-robust ASR techniques in practical application scenarios are provided as a guide to interested practitioners. The current challenges and future research directions in this field is also carefully analyzed.",
"title": ""
},
{
"docid": "45bc81322a880cc633e9ab56bcd5fe2a",
"text": "Convolutional Neural Networks (CNN) have showed success in achieving translation invariance for many image processing tasks. The success is largely attributed to the use of local filtering and max-pooling in the CNN architecture. In this paper, we propose to apply CNN to speech recognition within the framework of hybrid NN-HMM model. We propose to use local filtering and max-pooling in frequency domain to normalize speaker variance to achieve higher multi-speaker speech recognition performance. In our method, a pair of local filtering layer and max-pooling layer is added at the lowest end of neural network (NN) to normalize spectral variations of speech signals. In our experiments, the proposed CNN architecture is evaluated in a speaker independent speech recognition task using the standard TIMIT data sets. Experimental results show that the proposed CNN method can achieve over 10% relative error reduction in the core TIMIT test sets when comparing with a regular NN using the same number of hidden layers and weights. Our results also show that the best result of the proposed CNN model is better than previously published results on the same TIMIT test sets that use a pre-trained deep NN model.",
"title": ""
}
] |
[
{
"docid": "efe2b236907a1ebdf013acebc03e26d2",
"text": "Studying fundamental computer architecture and organization topics requires a significant amount of practical work if students are to acquire a good grasp of the theoretical concepts presented in classroom lectures or textbooks. The use of simulators is commonly adopted in order to reach this objective. However, as most of the available educational simulators focus on specific topics, different laboratory assignments usually require the use of different simulators. This paper presents a graphical and interactive reduced instruction set computer (RISC) processor and memory simulator that allows active learning of some theoretical concepts covered in computer architecture and organization courses. The simulator can be configured to present different processor views, from a simple serial one, without caches or pipelines, to a more realistic one with caches and superscalar execution. This approach allows a set of increasingly complex code-based laboratory assignments to be developed using a single simulator, covering topics ranging from assembly language programming to the analysis of the different kind of cache misses, pipeline hazards or branch prediction hits and misses produced during a program execution. The simulator has been included in a an automatic assessment system that helps the students to complete the assignments and helps teachers to evaluate the correctness of the students' solutions in different environments, such as high-enrollment courses or distance education. Since 1996, both the simulator and the automatic assessment system have been successfully used by more than 5000 students in computer architecture and organization courses at the Technical University of Madrid (UPM), Spain.",
"title": ""
},
{
"docid": "aaf9884ef7f4611279f30ce01f84e48c",
"text": "Nowadays, patients have a wealth of information available on the Internet. Despite the potential benefits of Internet health information seeking, several concerns have been raised about the quality of information and about the patient's capability to evaluate medical information and to relate it to their own disease and treatment. As such, novel tools are required to effectively guide patients and provide high-quality medical information in an intelligent and personalised manner. With this aim, this paper presents the Personal Health Information Recommender (PHIR), a system to empower patients by enabling them to search in a high-quality document repository selected by experts, avoiding the information overload of the Internet. In addition, the information provided to the patients is personalised, based on individual preferences, medical conditions and other profiling information. Despite the generality of our approach, we apply the PHIR to a personal health record system constructed for cancer patients and we report on the design, the implementation and a preliminary validation of the platform. To the best of our knowledge, our platform is the only one combining natural language processing, ontologies and personal information to offer a unique user experience.",
"title": ""
},
{
"docid": "4964e9f7bdaf2bcf0cb7ae28899c3c11",
"text": "Considering the traffic safety in the scenario of arterial road with on-ramp, this study proposes a time-to-collision (TTC) based vehicular collision warning algorithm under connected environment. In particular, the information of vehicles of interest, i.e., position, traveling direction and velocity, is assumed to be collected by the roadside device via the vehicle-to-infrastructure (V2I) communications. Then, the TTC of a pair of vehicles in arterial road and on-ramp is estimated based on the position, traveling direction and velocity difference of that pair of vehicles. Consequently, the TTC warning messages can be disseminated to vehicles within the communication range of the roadside device, so as to reduce the risk of collision. The proposed algorithm can be used in the cooperative vehicle infrastructure systems (CVIS) to improve the traffic safety.",
"title": ""
},
{
"docid": "453f381177097be0ec43b44688454472",
"text": "Dendritic spines of pyramidal neurons in the cerebral cortex undergo activity-dependent structural remodelling that has been proposed to be a cellular basis of learning and memory. How structural remodelling supports synaptic plasticity, such as long-term potentiation, and whether such plasticity is input-specific at the level of the individual spine has remained unknown. We investigated the structural basis of long-term potentiation using two-photon photolysis of caged glutamate at single spines of hippocampal CA1 pyramidal neurons. Here we show that repetitive quantum-like photorelease (uncaging) of glutamate induces a rapid and selective enlargement of stimulated spines that is transient in large mushroom spines but persistent in small spines. Spine enlargement is associated with an increase in AMPA-receptor-mediated currents at the stimulated synapse and is dependent on NMDA receptors, calmodulin and actin polymerization. Long-lasting spine enlargement also requires Ca2+/calmodulin-dependent protein kinase II. Our results thus indicate that spines individually follow Hebb's postulate for learning. They further suggest that small spines are preferential sites for long-term potentiation induction, whereas large spines might represent physical traces of long-term memory.",
"title": ""
},
{
"docid": "070a1c6b47a0a5c217e747cd7e0e0d0b",
"text": "In this paper we develop a computational model of visual adaptation for realistic image synthesis based on psychophysical experiments. The model captures the changes in threshold visibility, color appearance, visual acuity, and sensitivity over time that are caused by the visual system’s adaptation mechanisms. We use the model to display the results of global illumination simulations illuminated at intensities ranging from daylight down to starlight. The resulting images better capture the visual characteristics of scenes viewed over a wide range of illumination levels. Because the model is based on psychophysical data it can be used to predict the visibility and appearance of scene features. This allows the model to be used as the basis of perceptually-based error metrics for limiting the precision of global illumination computations. CR",
"title": ""
},
{
"docid": "2601ff3b4af85883017d8fb7e28e5faa",
"text": "The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing a SDN-based solution for network resource management raises several challenges as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.",
"title": ""
},
{
"docid": "4d2c5785e60fa80febb176165622fca7",
"text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.",
"title": ""
},
{
"docid": "7dfbb5e01383b5f50dbeb87d55ceb719",
"text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d0d02fc3ca58d6dbeb6e3dc21a9136a8",
"text": "Breast cancer represents the second important cause of cancer deaths in women today and it is the most common type of cancer in women. Disease diagnosis is one of the applications where data mining tools are proving successful results. Data mining with decision trees is popular and effective data mining classification approach. Decision trees have the ability to generate understandable classification rules, which are very efficient tool for transfer knowledge to physicians and medical specialists. In fundamental truth, they provide trails to find rules that could be evaluated for separating the input samples into one of several groups without having to state the functional relationship directly. The objective of this paper is to examine the performance of recent invented decision tree modeling algorithms and compared with one that achieved by radial basis function kernel support vector machine (RBF-SVM) on the diagnosis of breast cancer using cytological proven tumor dataset. Four models have been evaluated in decision tree: Chi-squared Automatic Interaction Detection (CHAID), Classification and Regression tree (C&R), Quick Unbiased Efficient Statistical Tree (QUEST), and Ross Quinlan new decision tree model C5. 0. The objective is to classify a tumor as either benign or malignant based on cell descriptions compound by microscopic examination using decision tree models. The proposed algorithm imputes the missing values with C&R tree. Then, the performances of the five models are measured by three statistical measures; classification",
"title": ""
},
{
"docid": "0dfd9fdc0fdaa5ccd2cbdd94833fade3",
"text": "There have been serious concerns recently about the security of microchips from hardware trojan horse insertion during manufacturing. This issue has been raised recently due to outsourcing of the chip manufacturing processes to reduce cost. This is an important consideration especially in critical applications such as avionics, communications, military, industrial and so on. A trojan is inserted into a main circuit at manufacturing and is mostly inactive unless it is triggered by a rare value or time event; then it produces a payload error in the circuit, potentially catastrophic. Because of its nature, a trojan may not be easily detected by functional or ATPG testing. The problem of trojan detection has been addressed only recently in very few works. Our work analyzes and formulates the trojan detection problem based on a frequency analysis under rare trigger values and provides procedures to generate input trigger vectors and trojan test vectors to detect trojan effects. We also provide experimental results.",
"title": ""
},
{
"docid": "0dde4746ba5e3c33fbe88b93f6d01f8d",
"text": "In this paper, we study the application of Extreme Learning Machine (ELM) algorithm for single layered feedforward neural networks to non-linear chaotic time series problems. In this algorithm the input weights and the hidden layer bias are randomly chosen. The ELM formulation leads to solving a system of linear equations in terms of the unknown weights connecting the hidden layer to the output layer. The solution of this general system of linear equations will be obtained using Moore-Penrose generalized pseudo inverse. For the study of the application of the method we consider the time series generated by the Mackey Glass delay differential equation with different time delays, Santa Fe A and UCR heart beat rate ECG time series. For the choice of sigmoid, sin and hardlim activation functions the optimal values for the memory order and the number of hidden neurons which give the best prediction performance in terms of root mean square error are determined. It is observed that the results obtained are in close agreement with the exact solution of the problems considered which clearly shows that ELM is a very promising alternative method for time series prediction. Keywords—Chaotic time series, Extreme learning machine, Generalization performance.",
"title": ""
},
{
"docid": "4e7443088eedf5e6199959a06ebc420c",
"text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similarly to the real market data, and formulate a model which closely approximates the stylized facts. We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.",
"title": ""
},
{
"docid": "257f9a2c47c7694d1915ca8a45d11a55",
"text": "Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy.",
"title": ""
},
{
"docid": "2cda7920287ead31dfc3c2067030bb70",
"text": "We propose a Genetic Programming architecture for the generation of foreign exchange trading strategies. The system’s principal features are the evolution of free-form strategies which do not rely on any prior models and the utilization of price series from multiple instruments as input data. This latter feature constitutes an innovation with respect to previous works documented in literature. In this article we utilize Open, High, Low, Close bar data at a 5 minutes frequency for the AUD.USD, EUR.USD, GBP.USD and USD.JPY currency pairs. We will test the implementation analyzing the in-sample and out-of-sample performance of strategies for trading the USD.JPY obtained across multiple algorithm runs. We will also evaluate the differences between strategies selected according to two different criteria: one relies on the fitness obtained on the training set only, the second one makes use of an additional validation dataset. Strategy activity and trade accuracy are remarkably stable between in and out of sample results. From a profitability aspect, the two criteria both result in strategies successful on out-of-sample data but exhibiting different characteristics. The overall best performing out-of-sample strategy achieves a yearly return of 19%.",
"title": ""
},
{
"docid": "5935224c53222d0234adffddae23eb04",
"text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"title": ""
},
{
"docid": "b2171911e8c45ebc86585e0a179718c3",
"text": "Robots are envisioned to collaborate with people in tasks that require physical manipulation such as a robot instructing a human in assembling household furniture, a human teaching a robot how to repair machinery, or a robot and a human collaboratively completing construction work. These scenarios characterize joint actions in which the robot and the human must effectively communicate and coordinate their actions with each other in order to successfully achieve task goals. Drawing on recent research in cognitive sciences on joint action, this paper discusses key mechanisms for effective coordination—joint attention, action observation, task-sharing, action coordination, and perception of agency—toward informing the design of communication and coordination mechanisms for robots. It presents two illustrative studies that explore how robot behavior might be designed to employ these mechanisms, particularly joint attention and action observation, to improve measures of task performance and perceptions of the robot in human-robot collaboration.",
"title": ""
},
{
"docid": "b8c92f2be87e0e7bb270a966f829d561",
"text": "In order to enhance the instantaneity of SLAM for indoor mobile robot, a RGBD SLAM method based on Kinect was proposed. In the method, oriented FAST and rotated BRIEF(ORB) algorithm was combined with progressive sample consensus(PROSAC) algorithm to execute feature extracting and matching. More specifically, ORB algorithm which has better property than many other feature descriptors was used for extracting feature. At the same time, ICP algorithm was adopted for coarse registration of the point clouds, and PROSAC algorithm which is superior than RANSAC in outlier removal was employed to eliminate incorrect matching. To make the result more accurate, pose-graph optimization was achieved based on g2o framework. In the end, a 3D volumetric map which can be directly used to the navigation of robots was created.",
"title": ""
},
{
"docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2",
"text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"title": ""
},
{
"docid": "7f5ffe7ff4ecdfec38250b3e41a621c8",
"text": "Flash memory has gained tremendous popularity in recent years. A variant of flash, referred to as NAND flash, is widely used in consumer electronics products, such as cell-phones and music players while NAND flash based Solid-State Disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. Computer architects have recently begun exploring the use NAND flash, from SSD organizations to disk caches and even new flash-based server architectures. In order to study this design space, architects require simulation tools that can provide detailed insights into the behavior of flash memory. This thesis presents two such tools that model two important characteristics of NAND flash: power consumption and endurance. The first tool, called FlashPower, is a microarchitecture level modeling tool that provides a detailed analytical power model for Single-Level Cell (SLC) based NAND flash memory. FlashPower estimates the power consumed by a NAND flash memory chip during its various operating modes. We have validated FlashPower against published chip power measurements and show that they are comparable. Results from a design space exploration using FlashPower indicate that the values of bits being read or written into NAND flash memory have a significant impact on energy dissipation. The second tool, called Flash EnduraNCE (FENCE), models the endurance characteristics of NAND flash and captures the impact of stress and recovery on NAND flash memory cells. Using FENCE, we show that the recovery process, which prior studies on flash based SSDs have not considered, significantly boosts endurance. Using a set of real enterprise workloads, we show that this recovery process allows for orders of magnitude higher endurance than those given in datasheets. Our results indicate that, under realistic usage conditions, SSDs that use standard wear-",
"title": ""
},
{
"docid": "11355807aa6b24f2eade366f391f0338",
"text": "Object detectors have hugely profited from moving towards an end-to-end learning paradigm: proposals, fea tures, and the classifier becoming one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. The de facto standard NMS algorithm is still fully hand-crafted, suspiciously simple, and — being based on greedy clustering with a fixed distance threshold — forces a trade-off between recall and precision. We propose a new network architecture designed to perform NMS, using only boxes and their score. We report experiments for person detection on PETS and for general object categories on the COCO dataset. Our approach shows promise providing improved localization and occlusion handling.",
"title": ""
}
] |
scidocsrr
|
6329d680a97a08757e758a5e08b51e36
|
FreeCam: A Hybrid Camera System for Interactive Free-Viewpoint Video
|
[
{
"docid": "b7ee47b961eeba5fa4dd28ce56ab47ee",
"text": "Virtual view synthesis from an array of cameras has been an essential element of three-dimensional video broadcasting/conferencing. In this paper, we propose a scheme based on a hybrid camera array consisting of four regular video cameras and one time-of-flight depth camera. During rendering, we use the depth image from the depth camera as initialization, and compute a view-dependent scene geometry using constrained plane sweeping from the regular cameras. View-dependent texture mapping is then deployed to render the scene at the desired virtual viewpoint. Experimental results show that the addition of the time-of-flight depth camera greatly improves the rendering quality compared with an array of regular cameras with similar sparsity. In the application of 3D video boardcasting/conferencing, our hybrid camera system demonstrates great potential in reducing the amount of data for compression/streaming while maintaining high rendering quality.",
"title": ""
}
] |
[
{
"docid": "f43c4d3eba766a5ad9c84f2cc29c2de7",
"text": "This paper presents an overview of 5 meta-analyses of early intensive behavioral intervention (EIBI) for young children with autism spectrum disorders (ASDs) published in 2009 and 2010. There were many differences between meta-analyses, leading to different estimates of effect and overall conclusions. The weighted mean effect sizes across meta-analyses for IQ and adaptive behavior ranged from g = .38-1.19 and g = .30-1.09, respectively. Four of five meta-analyses concluded EIBI was an effective intervention strategy for many children with ASDs. A discussion highlighting potential confounds and limitations of the meta-analyses leading to these discrepancies and conclusions about the efficacy of EIBI as an intervention for young children with ASDs are provided.",
"title": ""
},
{
"docid": "83393c9a0392249409a057914c71b1a0",
"text": "Recent achievement of the learning-based classification leads to the noticeable performance improvement in automatic polyp detection. Here, building large good datasets is very crucial for learning a reliable detector. However, it is practically challenging due to the diversity of polyp types, expensive inspection, and labor-intensive labeling tasks. For this reason, the polyp datasets usually tend to be imbalanced, i.e., the number of non-polyp samples is much larger than that of polyp samples, and learning with those imbalanced datasets results in a detector biased toward a non-polyp class. In this paper, we propose a data sampling-based boosting framework to learn an unbiased polyp detector from the imbalanced datasets. In our learning scheme, we learn multiple weak classifiers with the datasets rebalanced by up/down sampling, and generate a polyp detector by combining them. In addition, for enhancing discriminability between polyps and non-polyps that have similar appearances, we propose an effective feature learning method using partial least square analysis, and use it for learning compact and discriminative features. Experimental results using challenging datasets show obvious performance improvement over other detectors. We further prove effectiveness and usefulness of the proposed methods with extensive evaluation.",
"title": ""
},
{
"docid": "57acf9cf70717233daa2a204f9fe1e66",
"text": "Pedestrian Collision Mitigation Systems (PCMS) are in the market for a few years. Due to continuously evolving Euro NCAP regulations their presence will rapidly increase. Visual sensors, already capable of pedestrian classification, provide functional benefits because system responses can be better adapted to expected pedestrian's behavior. Nevertheless their performance will suffer under adverse environmental conditions like darkness, fog, rain or backlight. Even in such situations the performance of radar sensors is not significantly deteriorated. Enabling classification capability for radar-based systems will increase road safety further and will lower PCMS's overall costs. In this paper a multi-reflection-point pedestrian target model based on motion analysis is presented. Together with an appropriate sensor model, pedestrian radar signal responses can be provided for a wide range of relevant accident scenarios, without risk for the health of test persons. Besides determination of human classification features, the model provides identification of the limits in classical radar signal processing. Beyond these borderlines it offers the opportunity to evaluate parametric spectral analysis methods.",
"title": ""
},
{
"docid": "4fae776e81148182866f53aa65788dde",
"text": "OBJECTIVE\nThe Health Assessment Questionnaire - Disability Index (HAQ), used as a disability and outcome measurement in rheumatoid arthritis (RA), has been validated in several languages, but not in Chinese. Our aim was to validate the Chinese version of HAQ (Chinese-HAQ) to suit the needs of Chinese speaking patients with RA in an Asian setting.\n\n\nMETHODS\nThe original HAQ was modified in the context of Chinese culture and translated into Chinese by 2 translators aware of the objective of the questionnaire. The Chinese HAQ was self-administered by 42 patients with RA during their routine followup visit and one week later.\n\n\nRESULTS\nThe test-retest reliability assessed using Spearman's correlation coefficient was 0.84. Between dimensions measured in the HAQ, the highest test-retest reliability was observed for walking (Spearman correlation coefficient rs=0.80) and the lowest was for eating (rs=0.54). The internal consistency of the scale using Cronbach's alpha was high at 0.86. In terms of criterion validity, the Chinese-HAQ score was found to correlate well with American College of Rheumatology functional status (rs=0.501, p=0.01). The Chinese-HAQ scores also correlated well with markers of disease activity such as patient's perception of pain measured on a visual analog scale (rs=0.55, p < 0.001), grip strength in mm Hg (rs=-0.55. p < 0.001 ), and physician's assessment of disease activity (rs=0.59, p < 0.001).\n\n\nCONCLUSION\nThe Chinese HAQ is a reliable and valid instrument for studies measuring disability of patients with RA in Singapore.",
"title": ""
},
{
"docid": "5629a9cf39611bed79ce76e661dba2fe",
"text": "We investigate aspects of interoperability between a broad range of common annotation schemes for syntacto-semantic dependencies. With the practical goal of making the LinGO Redwoods Treebank accessible to broader usage, we contrast seven distinct annotation schemes of functor–argument structure, both in terms of syntactic and semantic relations. Drawing examples from a multi-annotated gold standard, we show how abstractly similar information can take quite different forms across frameworks. We further seek to shed light on the representational ‘distance’ between pure bilexical dependencies, on the one hand, and full-blown logical-form propositional semantics, on the other hand. Furthermore, we propose a fully automated conversion procedure from (logical-form) meaning representation to bilexical semantic dependencies.†",
"title": ""
},
{
"docid": "39c597ee9c9d9392e803aedeeeb28de9",
"text": "BACKGROUND\nApalutamide, a competitive inhibitor of the androgen receptor, is under development for the treatment of prostate cancer. We evaluated the efficacy of apalutamide in men with nonmetastatic castration-resistant prostate cancer who were at high risk for the development of metastasis.\n\n\nMETHODS\nWe conducted a double-blind, placebo-controlled, phase 3 trial involving men with nonmetastatic castration-resistant prostate cancer and a prostate-specific antigen doubling time of 10 months or less. Patients were randomly assigned, in a 2:1 ratio, to receive apalutamide (240 mg per day) or placebo. All the patients continued to receive androgen-deprivation therapy. The primary end point was metastasis-free survival, which was defined as the time from randomization to the first detection of distant metastasis on imaging or death.\n\n\nRESULTS\nA total of 1207 men underwent randomization (806 to the apalutamide group and 401 to the placebo group). In the planned primary analysis, which was performed after 378 events had occurred, median metastasis-free survival was 40.5 months in the apalutamide group as compared with 16.2 months in the placebo group (hazard ratio for metastasis or death, 0.28; 95% confidence interval [CI], 0.23 to 0.35; P<0.001). Time to symptomatic progression was significantly longer with apalutamide than with placebo (hazard ratio, 0.45; 95% CI, 0.32 to 0.63; P<0.001). The rate of adverse events leading to discontinuation of the trial regimen was 10.6% in the apalutamide group and 7.0% in the placebo group. The following adverse events occurred at a higher rate with apalutamide than with placebo: rash (23.8% vs. 5.5%), hypothyroidism (8.1% vs. 2.0%), and fracture (11.7% vs. 6.5%).\n\n\nCONCLUSIONS\nAmong men with nonmetastatic castration-resistant prostate cancer, metastasis-free survival and time to symptomatic progression were significantly longer with apalutamide than with placebo. (Funded by Janssen Research and Development; SPARTAN ClinicalTrials.gov number, NCT01946204 .).",
"title": ""
},
{
"docid": "d1fd4d535052a1c2418259c9b2abed66",
"text": "BACKGROUND\nSit-to-stand tests (STST) have recently been developed as easy-to-use field tests to evaluate exercise tolerance in COPD patients. As several modalities of the test exist, this review presents a synthesis of the advantages and limitations of these tools with the objective of helping health professionals to identify the STST modality most appropriate for their patients.\n\n\nMETHOD\nSeventeen original articles dealing with STST in COPD patients have been identified and analysed including eleven on 1min-STST and four other versions of the test (ranging from 5 to 10 repetitions and from 30 s to 3 min). In these studies the results obtained in sit-to-stand tests and the recorded physiological variables have been correlated with the results reported in other functional tests.\n\n\nRESULTS\nA good set of correlations was achieved between STST performances and the results reported in other functional tests, as well as quality of life scores and prognostic index. According to the different STST versions the processes involved in performance are different and consistent with more or less pronounced associations with various physical qualities. These tests are easy to use in a home environment, with excellent metrological properties and responsiveness to pulmonary rehabilitation, even though repetition of the same movement remains a fragmented and restrictive approach to overall physical evaluation.\n\n\nCONCLUSIONS\nThe STST appears to be a relevant and valid tool to assess functional status in COPD patients. While all versions of STST have been tested in COPD patients, they should not be considered as equivalent or interchangeable.",
"title": ""
},
{
"docid": "0b55af3bcc10ea30a340fcf257be00c1",
"text": "Magnetic Resonance Angiography (MRA) has become an essential MR contrast for imaging and evaluation of vascular anatomy and related diseases. MRA acquisitions are typically ordered for vascular interventions, whereas in typical scenarios, MRA sequences can be absent in the patient scans. This motivates the need for a technique that generates inexistent MRA from existing MR multi-contrast, which could be a valuable tool in retrospective subject evaluations and imaging studies. In this paper, we present a generative adversarial network (GAN) based technique to generate MRA from T1-weighted and T2-weighted MRI images, for the first time to our knowledge. To better model the representation of vessels which the MRA inherently highlights, we design a loss term dedicated to a faithful reproduction of vascularities. To that end, we incorporate steerable filter responses of the generated and reference images inside a Huber function loss term. Extending the wellestablished generator-discriminator architecture based on the recent PatchGAN model with the addition of steerable filter loss, the proposed steerable GAN (sGAN) method is evaluated on the large public database IXI. Experimental results show that the sGAN outperforms the baseline GAN method in terms of an overlap score with similar PSNR values, while it leads to improved visual perceptual quality.",
"title": ""
},
{
"docid": "473f80115b7fa9979d6d6ffa2995c721",
"text": "Context Olive oil, the main fat in the Mediterranean diet, contains polyphenols, which have antioxidant properties and may affect serum lipid levels. Contribution The authors studied virgin olive oil (high in polyphenols), refined olive oil (low in polyphenols), and a mixture of the 2 oils in equal parts. Two hundred healthy young men consumed 25 mL of an olive oil daily for 3 weeks followed by the other olive oils in a randomly assigned sequence. Olive oils with greater polyphenol content increased high-density lipoprotein (HDL) cholesterol levels and decreased serum markers of oxidation. Cautions The increase in HDL cholesterol level was small. Implications Virgin olive oil might have greater health benefits than refined olive oil. The Editors Polyphenol intake has been associated with low cancer and coronary heart disease (CHD) mortality rates (1). Antioxidant and anti-inflammatory properties and improvements in endothelial dysfunction and the lipid profile have been reported for dietary polyphenols (2). Studies have recently suggested that Mediterranean health benefits may be due to a synergistic combination of phytochemicals and fatty acids (3). Olive oil, rich in oleic acid (a monounsaturated fatty acid), is the main fat of the Mediterranean diet (4). To date, most of the protective effect of olive oil within the Mediterranean diet has been attributed to its high monounsaturated fatty acid content (5). However, if the effect of olive oil can be attributed solely to its monounsaturated fatty acid content, any type of olive oil, rapeseed or canola oil, or monounsaturated fatty acidenriched fat would provide similar health benefits. Whether the beneficial effects of olive oil on the cardiovascular system are exclusively due to oleic acid remains to be elucidated. The minor components, particularly the phenolic compounds, in olive oil may contribute to the health benefits derived from the Mediterranean diet. Among olive oils usually present on the market, virgin olive oils produced by direct-press or centrifugation methods have higher phenolic content (150 to 350 mg/kg of olive oil) (6). In experimental studies, phenolic compounds in olive oil showed strong antioxidant properties (7, 8). Oxidized low-density lipoprotein (LDL) is currently thought to be more damaging to the arterial wall than native LDL cholesterol (9). Results of randomized, crossover, controlled clinical trials on the antioxidant effect of polyphenols from real-life daily doses of olive oil in humans are, however, conflicting (10). Growing evidence suggests that dietary phenols (1115) and plant-based diets (16) can modulate lipid and lipoprotein metabolism. The Effect of Olive Oil on Oxidative Damage in European Populations (EUROLIVE) Study is a multicenter, randomized, crossover, clinical intervention trial that aims to assess the effect of sustained daily doses of olive oil, as a function of its phenolic content, on the oxidative damage to lipid and LDL cholesterol levels and the lipid profile as cardiovascular risk factors. Methods Participants We recruited healthy men, 20 to 60 years of age, from 6 European cities through newspaper and university advertisements. Of the 344 persons who agreed to be screened, 200 persons were eligible (32 men from Barcelona, Spain; 33 men from Copenhagen, Denmark; 30 men from Kuopio, Finland; 31 men from Bologna, Italy; 40 men from Postdam, Germany; and 34 men from Berlin, Germany) and were enrolled from September 2002 through June 2003 (Figure 1). 
Participants were eligible for study inclusion if they provided written informed consent, were willing to adhere to the protocol, and were in good health. We preselected volunteers when clinical record, physical examination, and blood pressure were strictly normal and the candidate was a nonsmoker. Next, we performed a complete blood count, biochemical laboratory analyses, and urinary dipstick tests to measure levels of serum glucose, total cholesterol, creatinine, alanine aminotransferase, and triglycerides. We included candidates with values within the reference range. Exclusion criteria were smoking; use of antioxidant supplements, aspirin, or drugs with established antioxidant properties; hyperlipidemia; obesity; diabetes; hypertension; intestinal disease; or any other disease or condition that would impair adherence. We excluded women to avoid the possible interference of estrogens, which are considered to be potential antioxidants (17). All participants provided written informed consent, and the local institutional ethics committees approved the protocol. Figure 1. Study flow diagram. Sequence of olive oil administration: 1) high-, medium-, and low-polyphenol olive oil; 2) medium-, low-, and high-polyphenol olive oil; and 3) low-, high-, and medium-polyphenol olive oil. Design and Study Procedure The trial was a randomized, crossover, controlled study. We randomly assigned participants consecutively to 1 of 3 sequences of olive oil administration. Participants received a daily dose of 25 mL (22 g) of 3 olive oils with high (366 mg/kg), medium (164 mg/kg), and low (2.7 mg/kg) polyphenol content (Figure 1) in replacement of other raw fats. Sequences were high-, medium-, and low-polyphenol olive oil (sequence 1); medium-, low-, and high-polyphenol olive oil (sequence 2); and low-, high-, and medium-polyphenol olive oil (sequence 3). In the coordinating center, we prepared random allocation to each sequence, taken from a Latin square, for each center by blocks of 42 participants (14 persons in each sequence), using specific software that was developed at the Municipal Institute for Medical Research, Barcelona, Spain (Aleator, Municipal Institute for Medical Research). The random allocation was faxed to the participating centers upon request for each individual included in the study. Treatment containers were assigned a code number that was concealed from participants and investigators, and the coordinating center disclosed the code number only after completion of statistical analyses. Olive oils were specially prepared for the trial. We selected a virgin olive oil with high natural phenolic content (366 mg/kg) and measured its fatty acid and vitamin E composition. We tested refined olive oil harvested from the same cultivar and soil to find an olive oil with similar quantities of fatty acid and a similar micronutrient profile. Vitamin E was adjusted to values similar to those of the selected virgin olive oil. Because phenolic compounds are lost in the refinement process, the refined olive oil had a low phenolic content (2.7 mg/kg). By mixing virgin and refined olive oil, we obtained an olive oil with an intermediate phenolic content (164 mg/kg). Olive oils did not differ in fat and micronutrient composition (that is, vitamin E, triterpenes, and sitosterols), with the exception of phenolic content. Three-week interventions were preceded by 2-week washout periods, in which we requested that participants avoid olive and olive oil consumption. 
We chose the 2-week washout period to reach the equilibrium in the plasma lipid profile because longer intervention periods with fat-rich diets did not modify the lipid concentrations (18). Daily doses of 25 mL of olive oil were blindly prepared in containers delivered to the participants at the beginning of each intervention period. We instructed participants to return the 21 containers at the end of each intervention period so that the daily amount of unconsumed olive oil could be registered. Dietary Adherence We measured tyrosol and hydroxytyrosol, the 2 major phenolic compounds in olive oil as simple forms or conjugates (7), by gas chromatography and mass spectrometry in 24-hour urine before and after each intervention period as biomarkers of adherence to the type of olive oil ingested. We asked participants to keep a 3-day dietary record at baseline and after each intervention period. We requested that participants in all centers avoid a high intake of foods that contain antioxidants (that is, vegetables, legumes, fruits, tea, coffee, chocolate, wine, and beer). A nutritionist also personally advised participants to replace all types of habitually consumed raw fats with the olive oils (for example, spread the assigned olive oil on bread instead of butter, put the assigned olive oil on boiled vegetables instead of margarine, and use the assigned olive oil on salads instead of other vegetable oils or standard salad dressings). Data Collection Main outcome measures were changes in biomarkers of oxidative damage to lipids. Secondary outcomes were changes in lipid levels and in biomarkers of the antioxidant status of the participants. We assessed outcome measures at the beginning of the study (baseline) and before (preintervention) and after (postintervention) each olive oil intervention period. We collected blood samples at fasting state together with 24-hour urine and recorded anthropometric variables. We measured blood pressure with a mercury sphygmomanometer after at least a 10-minute rest in the seated position. We recorded physical activity at baseline and at the end of the study and assessed it by using the Minnesota Leisure Time Physical Activity Questionnaire (19). We measured 1) glucose and lipid profile, including serum glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, and triglyceride levels determined by enzymatic methods (2023) and LDL cholesterol levels calculated by the Friedewald formula; 2) oxidative damage to lipids, including plasma-circulating oxidized LDL measured by enzyme immunoassay, plasma total F2-isoprostanes determined by using high-performance liquid chromatography and stable isotope-dilution and mass spectrometry, plasma C18 hydroxy fatty acids measured by gas chromatography and mass spectrometry, and serum LDL cholesterol uninduced conjugated dienes measured by spectrophotometry and adjusted for the cholesterol concentration in LDL cholesterol levels; 3) antioxidant sta",
"title": ""
},
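The Friedewald formula referenced in the passage above estimates LDL cholesterol from the directly measured lipids; a minimal worked sketch (values in mg/dL, valid only when triglycerides are below roughly 400 mg/dL) is:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL cholesterol (mg/dL) via the Friedewald formula.

    Assumes fasting values in mg/dL and triglycerides < 400 mg/dL,
    since VLDL cholesterol is approximated as triglycerides / 5.
    """
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is unreliable for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# Example: TC 200, HDL 50, TG 150  ->  LDL ~ 120 mg/dL
print(friedewald_ldl(200, 50, 150))
```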
{
"docid": "b9f0d40389d009aae00d73404cdd193e",
"text": "Machine learning applications are increasingly deployed not only to serve predictions using static models, but also as tightly-integrated components of feedback loops involving dynamic, real-time decision making. These applications pose a new set of requirements, none of which are difficult to achieve in isolation, but the combination of which creates a challenge for existing distributed execution frameworks: computation with millisecond latency at high throughput, adaptive construction of arbitrary task graphs, and execution of heterogeneous kernels over diverse sets of resources. We assert that a new distributed execution framework is needed for such ML applications and propose a candidate approach with a proof-of-concept architecture that achieves a 63x performance improvement over a state-of-the-art execution framework for a representative application.",
"title": ""
},
{
"docid": "a254a56cd7e66c399c371481d2a4ce27",
"text": "To protect Field-Programmable Gate Array (FPGA) designs against Intellectual Property (IP) theft and related issues such as product cloning, all major FPGA manufacturers offer a mechanism to encrypt the bitstream that is used to configure the FPGA. From a mathematical point of view, the employed encryption algorithms (e.g., Advanced Encryption Standard (AES) or 3DES) are highly secure. However, it has been shown that the bitstream encryption feature of several FPGA families is susceptible to side-channel attacks based on measuring the power consumption of the cryptographic module. In this article, we present the first successful attack on the bitstream encryption of the Altera Stratix II and Stratix III FPGA families. To this end, we analyzed the Quartus II software and reverse engineered the details of the proprietary and unpublished schemes used for bitstream encryption on Stratix II and Stratix III. Using this knowledge, we demonstrate that the full 128-bit AES key of a Stratix II as well as the full 256-bit AES key of a Stratix III can be recovered by means of side-channel attacks. In both cases, the attack can be conducted in a few hours. The complete bitstream of these FPGAs that are (seemingly) protected by the bitstream encryption feature can hence fall into the hands of a competitor or criminal—possibly implying system-wide damage if confidential information such as proprietary encryption schemes or secret keys programmed into the FPGA are extracted. In addition to lost IP, reprogramming the attacked FPGA with modified code, for instance, to secretly plant a hardware Trojan, is a particularly dangerous scenario for many security-critical applications.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "28079463796ae8882bba5c0ed35a2482",
"text": "Automated segmentation and analysis of tree-like structures from 3D medical images are important for many medical applications, such as those dealing with blood vasculature or lung airways. However, there is an absence of large databases of expert segmentations and analyses of such 3D medical images, which impedes the validation and training of proposed image analysis algorithms. In this work, we simulate volumetric images of vascular trees and generate the corresponding ground-truth segmentations, bifurcation locations, branch properties, and tree hierarchy. The tree generation is performed by iteratively growing a vascular structure based on a user-defined (possibly spatially varying) oxygen demand map. We describe the details of the algorithm and provide a variety of example results.",
"title": ""
},
{
"docid": "1345b60ea27bf047708c641cebe265f7",
"text": "A fruit recognition approach based on segmenting the point cloud acquired by a 3D camera into approximately convex surfaces is considered. A segmentation approach which transforms a depth image into a triangular mesh and then segments this mesh into approximately convex segments is applied to depth images of fruits on trees. An analysis of the results obtained by this approach is performed with the intention to determine how successful the studied method is in detecting fruit as separate objects in a point cloud. The reported analysis gives a valuable insight into the potential applicability of the tested methodology in the preprocessing stage of a fruit recognition system as well as its drawbacks. Keywords—fruit recognition;3D camera; convex sets; segmentation",
"title": ""
},
{
"docid": "589396a7c9dae0567f0bcd4d83461a6f",
"text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.",
"title": ""
},
{
"docid": "0b51b727f39a9c8ea6580794c6f1e2bb",
"text": "Many researchers proposed different methodologies for the text skew estimation in binary images/gray scale images. They have been used widely for the skew identification of the printed text. There exist so many ways algorithms for detecting and correcting a slant or skew in a given document or image. Some of them provide better accuracy but are slow in speed, others have angle limitation drawback. So a new technique for skew detection in the paper, will reduce the time and cost. Keywords— Document image processing, Skew detection, Nearest-neighbour approach, Moments, Hough transformation.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "cccb38dab9ead68b5c3bd88f03d75cb0",
"text": "e múltiplos episódios de sangramento de varizes esofágicas e gástricas, passou por um procedimento de TIPS para controlar a hemorragia gastroesophaneal refratária e como uma ponte para transplante de fígado. Na admissão, ele estava clinicamente estável e tinha estágio final da doença hepática pontuação de 13 e bilirrubina sérica total inicial de 3,7 mg/dl. O procedimento TIPS foi realizada através da veia jugular interna direita, usando a padronização9. O stent selecionado e disponível era de metal autoexpansível Wallstent stent 10 x 68 mm (Boston Scientific Corporation, MA, EUA), que foi devidamente implantado no fígado, criando um shunt entre a veia hepática direita e um dos ramos esquerdos da veia porta. O trajeto pós-stent foi dilatada com balão de 10 mm e venograma portal de controle demonstrou patência de shunt e não opacificação significativa da circulação colateral venosa. Houve redução da pressão venosa portal de 26-16 mm Hg, e do gradiente de pressão portosistêmico 19-9 mmHg. O procedimento transcorreu sem intercorrências e paciente permaneceu no hospital para observação. Três dias depois ele apresentou icterícia súbita sem quaisquer sinais de insuficiência hepática (encefalopatia) ou sepse (febre ou hipotensão). Neste momento, os exames mostraram bilurribina nível total de 41,6 mg/dl (bilirrubina direta de 28,1 mg/dl), a relação internacional de 1/2, fosfatase alcalina de 151 UI/l, alanina aminotransferase de 60 UI/l, de aspartato aminotransferase 104 UI/l, de creatinina de 1,0 mg/dl e contagem de leucócitos totais de 6,800/ml. Doppler do fígado mostrou stent adequado, permeabilidade e fluxo anterógrado, sem evidência de dilatação das vias biliares. Tomografia computadorizada e angiografia abdominais foram realizadas e não forneceram qualquer informação adicional. Uma semana depois, o paciente estava clinicamente inalterado, com exceção de icterícia piorada. Não havia nenhuma evidência de infecção, ou encefalopatia ou hemobilia. Apesar dos testes de laboratório não serem INTRODUÇÃO",
"title": ""
},
{
"docid": "bddf8420c2dd67dd5be10556088bf653",
"text": "The Hadoop Distributed File System (HDFS) is a distributed storage system that stores large-scale data sets reliably and streams those data sets to applications at high bandwidth. HDFS provides high performance, reliability and availability by replicating data, typically three copies of every data. The data in HDFS changes in popularity over time. To get better performance and higher disk utilization, the replication policy of HDFS should be elastic and adapt to data popularity. In this paper, we describe ERMS, an elastic replication management system for HDFS. ERMS provides an active/standby storage model for HDFS. It utilizes a complex event processing engine to distinguish real-time data types, and then dynamically increases extra replicas for hot data, cleans up these extra replicas when the data cool down, and uses erasure codes for cold data. ERMS also introduces a replica placement strategy for the extra replicas of hot data and erasure coding parities. The experiments show that ERMS effectively improves the reliability and performance of HDFS and reduce storage overhead.",
"title": ""
},
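The active/standby idea above amounts to choosing a storage scheme per block from its observed popularity; a minimal policy sketch (the thresholds and replica counts are illustrative assumptions, not values from the paper) might look like:

```python
def target_replication(access_rate_per_hour, hot_threshold=100.0, cold_threshold=1.0,
                       default_replicas=3, max_extra=3):
    """Pick a storage strategy for a block based on its observed popularity.

    Hot data gets extra full replicas, cold data is demoted to erasure coding,
    and everything else keeps the HDFS default of three replicas.
    (Illustrative thresholds only.)
    """
    if access_rate_per_hour >= hot_threshold:
        extra = min(max_extra, int(access_rate_per_hour // hot_threshold))
        return {"scheme": "replication", "replicas": default_replicas + extra}
    if access_rate_per_hour <= cold_threshold:
        return {"scheme": "erasure_coding", "replicas": 1}
    return {"scheme": "replication", "replicas": default_replicas}

print(target_replication(250.0))   # hot block  -> extra replicas
print(target_replication(0.2))     # cold block -> erasure coding
```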
{
"docid": "1d3318884ffe201e50312b68bf51956a",
"text": "This paper explores alternate algorithms, reward functions and feature sets for performing multi-document summarization using reinforcement learning with a high focus on reproducibility. We show that ROUGE results can be improved using a unigram and bigram similarity metric when training a learner to select sentences for summarization. Learners are trained to summarize document clusters based on various algorithms and reward functions and then evaluated using ROUGE. Our experiments show a statistically significant improvement of 1.33%, 1.58%, and 2.25% for ROUGE-1, ROUGE-2 and ROUGEL scores, respectively, when compared with the performance of the state of the art in automatic summarization with reinforcement learning on the DUC2004 dataset. Furthermore query focused extensions of our approach show an improvement of 1.37% and 2.31% for ROUGE-2 and ROUGE-SU4 respectively over query focused extensions of the state of the art with reinforcement learning on the DUC2006 dataset.",
"title": ""
}
] |
scidocsrr
|
734bc3b53f135624c8fc359c707cbad3
|
Multiple Graph Label Propagation by Sparse Integration
|
[
{
"docid": "79414d5ba6a202bf52d26a74caff4784",
"text": "The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.",
"title": ""
}
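For the two-view regularized least-squares case, the agreement-and-smoothness idea described above can be written roughly as the following objective, with l labeled and u unlabeled examples (a generic sketch of a co-regularization objective; the notation here is ours, not the paper's):

```latex
\min_{f^{(1)},\,f^{(2)}}\;
\sum_{i=1}^{l}\Big[\big(y_i-f^{(1)}(x_i^{(1)})\big)^2+\big(y_i-f^{(2)}(x_i^{(2)})\big)^2\Big]
+\gamma_1\|f^{(1)}\|^2+\gamma_2\|f^{(2)}\|^2
+\mu\sum_{j=l+1}^{l+u}\big(f^{(1)}(x_j^{(1)})-f^{(2)}(x_j^{(2)})\big)^2
```

The first term fits each view's classifier to the labels, the norm terms enforce smoothness within each view, and the last term penalizes disagreement between the two views on unlabeled examples.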
] |
[
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "89b73780755b1ee92babc7ce3933c05e",
"text": "Big Data analytics provide support for decision making by discovering patterns and other useful information from large set of data. Organizations utilizing advanced analytics techniques to gain real value from Big Data will grow faster than their competitors and seize new opportunities. Cross-Industry Standard Process for Data Mining (CRISP-DM) is an industry-proven way to build predictive analytics models across the enterprise. However, the manual process in CRISP-DM hinders faster decision making on real-time application for efficient data analysis. In this paper, we present an approach to automate the process using Automatic Service Composition (ASC). Focusing on the planning stage of ASC, we propose an ontology-based workflow generation method to automate the CRISP-DM process. Ontology and rules are designed to infer workflow for data analytics process according to the properties of the datasets as well as user needs. Empirical study of our prototyping system has proved the efficiency of our workflow generation method.",
"title": ""
},
{
"docid": "47d71c3063e365205a6450f4db777503",
"text": "This in vivo research investigated whether pulp treatments using formocresol for 7 days would cause mutagenic changes in children’s lymphocytes. The mutagenicity was tested in lymphocyte cultures established from the peripheral blood of children living in Brazil. The samples consisted of 2000 cells from teeth undergoing formocresol pulpotomies in which the formocresol pellet was sealed in the primary tooth for 7 days. It was removed on the seventh day, the base was placed, and the tooth was restored. Two venous blood samples (6–8 ml) were collected from each child; the first was prior to pulp therapy, and the second was 7 days later. Two thousand metaphases were analyzed. The level of significance adopted for the statistics was P < 0.05, and a random effects meta-analysis was performed combining this and two previous studies. There was no significant difference found in the metaphase analysis between the blood samples taken before and after the pulpotomy treatment (Wilcoxon signed rank test); however, the meta-analysis showed a significant difference between the combined studies. This study did not reveal any mutagenic effects, but based on the combined meta-analysis, we recommend the careful use of formocresol. This research helps to bring scientific evidence of the safe use of formocresol in deciduous pulpotomy treatments.",
"title": ""
},
{
"docid": "8f9e5d1288ca365e7b5350b10e86a54b",
"text": "While developing a program to render Voronoi diagrams, I accidentally produced a strange and surprising image. The unexpected behaviour turned out to be caused by a combination of reasons from signal processing and computer architecture. I describe the process that led to the pattern, explain its structure, and display many of the wonderful designs that can be produced from this and related techniques.",
"title": ""
},
{
"docid": "23afd4bc218b97037a25f5373b800c00",
"text": "Multiconnection attacks such as DoS, probe, flooding, etc., have become common and attackers have come out with sophisticated techniques as well as tools to launch variants of such attacks. This growing amount of attack and sophistication has given rise to the increasing need of efficient detection algorithm. To test and compare the performances of the proposed detection algorithms, benchmark datasets are required to represent the dynamic nature of the network. Though certain benchmark datasets are available, most datasets are either synthetic or contains suppressed information. In this paper, we introduce SSENet-2014 dataset which is generated in a real network environment. The attacks were generated using attack tools while carrying out normal activities. The description of the SSENet-2014 dataset is given. Then, a comparison is carried out with the most popular intrusion detection dataset, 10% KDD Cup 99. Two clustering approaches of K Means and Self Organizing Map (SOM) have been used in our experiments. Box plot is used to analyze the attributes of the two datasets. The results confirm the variability existing in the attribute values of 10% KDD Cup 99 and SSENet-2014 dataset. Also, it can be seen that SSENet-2014 dataset generated from a real network varies considerably from 10% KDD Cup 99 which is generated from simulated traffic.",
"title": ""
},
{
"docid": "d3e65fbcc3484f304f78039731f2ba30",
"text": "Rademacher complexity is often used to characterize the learnability of a hypothesis class and is known to be related to the class size. We leverage this observation and introduce a new technique for estimating the size of an arbitrary weighted set, defined as the sum of weights of all elements in the set. Our technique provides upper and lower bounds on a novel generalization of Rademacher complexity to the weighted setting in terms of the weighted set size. This generalizes Massart’s Lemma, a known upper bound on the Rademacher complexity in terms of the unweighted set size. We show that the weighted Rademacher complexity can be estimated by solving a randomly perturbed optimization problem, allowing us to derive high-probability bounds on the size of any weighted set. We apply our method to the problems of calculating the partition function of an Ising model and computing propositional model counts (#SAT). Our experiments demonstrate that we can produce tighter bounds than competing methods in both the weighted and unweighted settings.",
"title": ""
},
{
"docid": "27f7025c2ee602b5ad2dee830836bbef",
"text": "Arsenic contamination of rice is widespread, but the rhizosphere processes influencing arsenic attenuation remain unresolved. In particular, the formation of Fe plaque around rice roots is thought to be an important barrier to As uptake, but the relative importance of this mechanism is not well characterized. Here we elucidate the colocalization of As species and Fe on rice roots with variable Fe coatings; we used a combination of techniques--X-ray fluorescence imaging, μXANES, transmission X-ray microscopy, and tomography--for this purpose. Two dominant As species were observed in fine roots-inorganic As(V) and As(III) -with minor amounts of dimethylarsinic acid (DMA) and arsenic trisglutathione (AsGlu(3)). Our investigation shows that variable Fe plaque formation affects As entry into rice roots. In roots with Fe plaque, As and Fe were strongly colocated around the root; however, maximal As and Fe were dissociated and did not encapsulate roots that had minimal Fe plaque. Moreover, As was not exclusively associated with Fe plaque in the rice root system; Fe plaque does not coat many of the young roots or the younger portion of mature roots. Young, fine roots, important for solute uptake, have little to no iron plaque. Thus, Fe plaque does not directly intercept (and hence restrict) As supply to and uptake by rice roots but rather serves as a bulk scavenger of As predominantly near the root base.",
"title": ""
},
{
"docid": "3100f5d0ed870be38770caf729798624",
"text": "Our research objective is to facilitate the identification of true input manipulation vulnerabilities via the combination of static analysis, runtime detection, and automatic testing. We propose an approach for SQL injection vulnerability detection, automated by a prototype tool SQLInjectionGen. We performed case studies on two small web applications for the evaluation of our approach compared to static analysis for identifying true SQL injection vulnerabilities. In our case study, SQLInjectionGen had no false positives, but had a small number of false negatives while the static analysis tool had a false positive for every vulnerability that was actually protected by a white or black list.",
"title": ""
},
{
"docid": "bcee490d287e146ff1c4fe7f1dee2cbf",
"text": "Biometrics is a growing technology, which has been widely used in forensics, secured access and prison security. A biometric system is fundamentally a pattern recognition system that recognizes a person by determining the authentication by using his different biological features i.e. Fingerprint, retina-scan, iris scan, hand geometry, and face recognition are leading physiological biometrics and behavioral characteristic are Voice recognition, keystroke-scan, and signature-scan. In this paper different biometrics techniques such as Iris scan, retina scan and face recognition techniques are discussed. Keyword: Biometric, Biometric techniques, Eigenface, Face recognition.",
"title": ""
},
{
"docid": "84750fa3f3176d268ae85830a87f7a24",
"text": "Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit of contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance as the traditional approaches. However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results prompt for novel tools to support process automation in social coding platforms, that combine social (e.g., common interests among developers) and technical factors (e.g., developers’ expertise). © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ddc3241c09a33bde1346623cf74e6866",
"text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.",
"title": ""
},
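The model described above relates the interval being predicted to the measurements recorded one and two years earlier; a minimal least-squares sketch of that idea (the hourly data layout and the intercept term are illustrative assumptions) is:

```python
import numpy as np

def fit_seasonal_linear_model(series, hours_per_year=8760):
    """Fit y[t] ~ a*y[t - 1 year] + b*y[t - 2 years] + c by least squares.

    `series` is an hourly wind-speed record covering at least two full years
    plus the interval to be modelled. (Layout is an illustrative assumption.)
    """
    series = np.asarray(series, dtype=float)
    t = np.arange(2 * hours_per_year, len(series))
    X = np.column_stack([series[t - hours_per_year],
                         series[t - 2 * hours_per_year],
                         np.ones(len(t))])
    y = series[t]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # a, b, c

def predict(coeffs, one_year_old_value, two_year_old_value):
    a, b, c = coeffs
    return a * one_year_old_value + b * two_year_old_value + c
```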
{
"docid": "7f110e4769b996de13afe63962bcf2d2",
"text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.",
"title": ""
},
{
"docid": "c02a55b5a3536f3ab12c65dd0d3037ef",
"text": "The emergence of large-scale receptor-based systems has enabled applications to execute complex business logic over data generated from monitoring the physical world. An important functionality required by these applications is the detection and response to complex events, often in real-time. Bridging the gap between low-level receptor technology and such high-level needs of applications remains a significant challenge.We demonstrate our solution to this problem in the context of HiFi, a system we are building to solve the data management problems of large-scale receptor-based systems. Specifically, we show how HiFi generates simple events out of receptor data at its edges and provides high-functionality complex event processing mechanisms for sophisticated event detection using a real-world library scenario.",
"title": ""
},
{
"docid": "3994b51e9b9ed5aec98ed33e541a8e8c",
"text": "The development of relational database management systems served to focus the data management community for decades, with spectacular results. In recent years, however, the rapidly-expanding demands of \"data everywhere\" have led to a field comprised of interesting and productive efforts, but without a central focus or coordinated agenda. The most acute information management challenges today stem from organizations (e.g., enterprises, government agencies, libraries, \"smart\" homes) relying on a large number of diverse, interrelated data sources, but having no way to manage their dataspaces in a convenient, integrated, or principled fashion. This paper proposes dataspaces and their support systems as a new agenda for data management. This agenda encompasses much of the work going on in data management today, while posing additional research objectives.",
"title": ""
},
{
"docid": "7e71c614713dce3513ebc1f1aa07579a",
"text": "Because of the long colonial history of Filipinos and the highly Americanized climate of postcolonial Philippines, many scholars from various disciplines have speculated that colonialism and its legacies may play major roles in Filipino emigration to the United States. However, there are no known empirical studies in psychology that specifically investigate whether colonialism and its effects have influenced the psychological experiences of Filipino American immigrants prior to their arrival in the United States. Further, there is no existing empirical study that specifically investigates the extent to which colonialism and its legacies continue to influence Filipino American immigrants' mental health. Thus, using interviews (N = 6) and surveys (N = 219) with Filipino American immigrants, two studies found that colonialism and its consequences are important factors to consider when conceptualizing the psychological experiences of Filipino American immigrants. Specifically, the findings suggest that (a) Filipino American immigrants experienced ethnic and cultural denigration in the Philippines prior to their U.S. arrival, (b) ethnic and cultural denigration in the Philippines and in the United States may lead to the development of colonial mentality (CM), and (c) that CM may have negative mental health consequences among Filipino American immigrants. The two studies' findings suggest that the Filipino American immigration experience cannot be completely captured by the voluntary immigrant narrative, as they provide empirical support to the notion that the Filipino American immigration experience needs to be understood in the context of colonialism and its most insidious psychological legacy- CM.",
"title": ""
},
{
"docid": "95a845c61fd1e98d62f1ab175d167276",
"text": "The ability to transfer knowledge from previous experiences is critical for an agent to rapidly adapt to different environments and effectively learn new tasks. In this paper we conduct an empirical study of Deep Q-Networks (DQNs) where the agent is evaluated on previously unseen environments. We show that we can train a robust network for navigation in 3D environments and demonstrate its effectiveness in generalizing to unknown maps with unknown background textures. We further investigate the effectiveness of pretraining and finetuning for transferring knowledge between various scenarios in 3D environments. In particular, we show that the features learnt by the navigation network can be effectively utilized to transfer knowledge between a diverse set of tasks, such as object collection, deathmatch, and self-localization.",
"title": ""
},
{
"docid": "6361a7c2a89c847792c7227bfb5c2391",
"text": "In this paper, we propose a hybrid methodology based on a combination of analytical, numerical and machine learning methods for performing dexterous, in-hand manipulation with simple, adaptive robot hands. A constrained optimization scheme utilizes analytical models that describe the kinematics of adaptive hands and classic conventions for modelling quasistatically the manipulation problem, providing intuition about the problem mechanics. A machine learning (ML) scheme is used in order to split the problem space, deriving task-specific models that account for difficult to model, dynamic phenomena (e.g., slipping). In this respect, the ML scheme: 1) employs the simulation module in order to explore the feasible manipulation paths for a specific hand-object system, 2) feeds the feasible paths to an experimental setup that collects manipulation data in an automated fashion, 3) uses clustering techniques in order to group together similar manipulation trajectories, 4) trains a set of task-specific manipulation models and 5) uses classification techniques in order to trigger a task-specific model based on the user provided task specifications. The efficacy of the proposed methodology is experimentally validated using various adaptive robot hands in 2D and 3D in-hand manipulation tasks.",
"title": ""
},
{
"docid": "3dcb6a88aafb7a9c917ccdd306768f51",
"text": "Protein quality describes characteristics of a protein in relation to its ability to achieve defined metabolic actions. Traditionally, this has been discussed solely in the context of a protein's ability to provide specific patterns of amino acids to satisfy the demands for synthesis of protein as measured by animal growth or, in humans, nitrogen balance. As understanding of protein's actions expands beyond its role in maintaining body protein mass, the concept of protein quality must expand to incorporate these newly emerging actions of protein into the protein quality concept. New research reveals increasingly complex roles for protein and amino acids in regulation of body composition and bone health, gastrointestinal function and bacterial flora, glucose homeostasis, cell signaling, and satiety. The evidence available to date suggests that quality is important not only at the minimum Recommended Dietary Allowance level but also at higher intakes. Currently accepted methods for measuring protein quality do not consider the diverse roles of indispensable amino acids beyond the first limiting amino acid for growth or nitrogen balance. As research continues to evolve in assessing protein's role in optimal health at higher intakes, there is also need to continue to explore implications for protein quality assessment.",
"title": ""
},
{
"docid": "500202f494dc3769fdb0c7de98aec9c7",
"text": "Clocked comparators have found widespread use in noise sensitive applications including analog-to-digital converters, wireline receivers, and memory bit-line detectors. However, their nonlinear, time-varying dynamics resulting in discrete output levels have discouraged the use of traditional linear time-invariant (LTI) small-signal analysis and noise simulation techniques. This paper describes a linear, time-varying (LTV) model of clock comparators that can accurately predict the decision error probability without resorting to more general stochastic system models. The LTV analysis framework in conjunction with the linear, periodically time-varying (LPTV) simulation algorithms available from RF circuit simulators can provide insights into the intrinsic sampling and decision operations of clock comparators and the major contribution sources to random decision errors. Two comparators are simulated and compared with laboratory measurements. A 90-nm CMOS comparator is measured to have an equivalent input-referred random noise of 0.73 mVrms for dc inputs, matching simulation results with a short channel excess noise factor ¿ = 2.",
"title": ""
},
{
"docid": "feb565b4decfdb3d627ab62b7cfcae8f",
"text": "Though enterprise resource planning (ERP) has gained some prominence in the information systems (IS) literature over the past few years and is a signi®cant phenomenon in practice, through (a) historical analysis, (b) meta-analysis of representative IS literature, and (c) a survey of academic experts, we reveal dissenting views on the phenomenon. Given this diversity of perspectives, it is unlikely that at this stage a broadly agreed de®nition of ERP can be achieved. We thus seek to increase awareness of the issues and stimulate further discussion, with the ultimate aim being to: (1) aid communication amongst researchers and between researchers and practitioners; (2) inform development of teaching materials on ERP and related concepts in university curricula and in commercial education and training; and (3) aid communication amongst clients, consultants and vendors. Increased transparency of the ERP-concept within IS may also bene®t other aligned ®elds of knowledge.",
"title": ""
}
] |
scidocsrr
|
8277fdf8534c181364996aceb7fbdcda
|
Bidirectional Long Short-Term Memory Variational Autoencoder
|
[
{
"docid": "e10dbbc6b3381f535ff84a954fcc7c94",
"text": "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of Shotton et al. [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×.. .×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.",
"title": ""
},
{
"docid": "8c70f1af7d3132ca31b0cf603b7c5939",
"text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.",
"title": ""
},
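The SMIJ idea above can be sketched in a few lines: segment the joint-angle sequence, rank joints within each segment by an informativeness measure, and keep the top-ranked joint indices (the segment count, the use of variance as the measure, and top-k are illustrative choices; the paper also discusses other measures):

```python
import numpy as np

def smij(joint_angles, num_segments=10, top_k=1):
    """Sequence of the Most Informative Joints (sketch).

    joint_angles: array of shape (T, J), one angle trajectory per joint.
    For each temporal segment, rank joints by the variance of their angle
    within the segment and keep the top_k joint indices.
    """
    segments = np.array_split(np.asarray(joint_angles, dtype=float),
                              num_segments, axis=0)
    sequence = []
    for seg in segments:
        variances = seg.var(axis=0)
        ranked = np.argsort(variances)[::-1][:top_k]
        sequence.append(tuple(ranked.tolist()))
    return sequence  # e.g. [(5,), (5,), (12,), ...] — a sequence of joint ids
```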
{
"docid": "1d6e23fedc5fa51b5125b984e4741529",
"text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.",
"title": ""
},
{
"docid": "695af0109c538ca04acff8600d6604d4",
"text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"title": ""
}
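A compact sketch of the part-based idea above, with each body part fed to its own recurrent subnetwork before fusion (this uses a single flat fusion layer rather than the paper's multi-stage hierarchical fusion, and all sizes are placeholders):

```python
import torch
import torch.nn as nn

class PartBasedSkeletonRNN(nn.Module):
    """Simplified sketch of a part-based RNN for skeleton action recognition.

    The skeleton is split into five parts (two arms, two legs, trunk); each
    part is fed to its own LSTM, the per-frame part features are concatenated,
    passed through a fusion LSTM, and classified.
    """
    def __init__(self, part_dims, hidden_size=64, num_classes=20):
        super().__init__()
        self.part_rnns = nn.ModuleList(
            [nn.LSTM(d, hidden_size, batch_first=True) for d in part_dims])
        self.fusion = nn.LSTM(hidden_size * len(part_dims), hidden_size,
                              batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, parts):
        # parts: list of tensors, each of shape (batch, time, part_dim)
        part_feats = [rnn(x)[0] for rnn, x in zip(self.part_rnns, parts)]
        fused, _ = self.fusion(torch.cat(part_feats, dim=-1))
        # temporally accumulated output: average the per-frame logits
        return self.classifier(fused).mean(dim=1)
```

A call like PartBasedSkeletonRNN([9, 9, 12, 12, 15]) would correspond to five parts with those (hypothetical) per-frame feature sizes.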
] |
[
{
"docid": "aa74720aa2d191b9eb25104ee3a33b1e",
"text": "We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function, the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.",
"title": ""
},
{
"docid": "b3e1bdd7cfca17782bde698297e191ab",
"text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.",
"title": ""
},
{
"docid": "e751fdbc980c36b95c81f0f865bb5033",
"text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.",
"title": ""
},
{
"docid": "ee169784b96c5d1cf77d1119f0c55964",
"text": "The increasing amount of machinereadable data available in the context of the Semantic Web creates a need for methods that transform such data into human-comprehensible text. In this paper we develop and evaluate a Natural Language Generation (NLG) system that converts RDF data into natural language text based on an ontology and an associated ontology lexicon. While it follows a classical NLG pipeline, it diverges from most current NLG systems in that it exploits an ontology lexicon in order to capture context-specific lexicalisations of ontology concepts, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus. We apply the developed approach to the cooking domain, providing both an ontology and an ontology lexicon in lemon format. Finally, we evaluate fluency and adequacy of the generated recipes with respect to two target audiences: cooking novices and advanced cooks.",
"title": ""
},
{
"docid": "b6b553e952dd3ccc79832a6cc4752885",
"text": "OBJECTIVE\nThe aim of the present study was to analyze the soft tissue barrier formed to implant abutments made of different materials.\n\n\nMATERIAL AND METHODS\nSix Labrador dogs, about 1 year old, were used. All mandibular premolars and the first, second and third maxillary premolars were extracted. Three months later four implants (OsseoSpeed, 4.5 x 9 mm, Astra Tech Dental, Mölndal, Sweden) were placed in the edentulous premolar region on one side of the mandible and healing abutments were connected. One month later, the healing abutments were disconnected and four new abutments were placed in a randomized order. Two of the abutments were made of titanium (Ti), while the remaining abutments were made of ZrO(2) or AuPt-alloy. A 5-months plaque control program was initiated. Three months after implant surgery, the implant installation procedure and the subsequent abutment shift were repeated in the contra-lateral mandibular region. Two months later, the dogs were euthanized and biopsies containing the implant and the surrounding soft and hard peri-implant tissues were collected and prepared for histological analysis.\n\n\nRESULTS\nIt was demonstrated that the soft tissue dimensions at Ti- and ZrO(2) abutments remained stable between 2 and 5 months of healing. At Au/Pt-alloy abutment sites, however, an apical shift of the barrier epithelium and the marginal bone occurred between 2 and 5 months of healing. In addition, the 80-mum-wide connective tissue zone lateral to the Au/Pt-alloy abutments contained lower amounts of collagen and fibroblasts and larger fractions of leukocytes than the corresponding connective tissue zone of abutments made of Ti and ZrO(2).\n\n\nCONCLUSION\nIt is suggested that the soft tissue healing to abutments made of titanium and ZrO(2) is different to that at abutments made of AuPt-alloy.",
"title": ""
},
{
"docid": "09dfc388fc9eec17c2ec9dd5002af8c3",
"text": "Having effective visualizations of filesystem provenance data is valuable for understanding its complex hierarchical structure. The most common visual representation of provenance data is the node-link diagram. While effective for understanding local activity, the node-link diagram fails to offer a high-level summary of activity and inter-relationships within the data. We present a new tool, InProv, which displays filesystem provenance with an interactive radial-based tree layout. The tool also utilizes a new time-based hierarchical node grouping method for filesystem provenance data we developed to match the user's mental model and make data exploration more intuitive. We compared InProv to a conventional node-link based tool, Orbiter, in a quantitative evaluation with real users of filesystem provenance data including provenance data experts, IT professionals, and computational scientists. We also compared in the evaluation our new node grouping method to a conventional method. The results demonstrate that InProv results in higher accuracy in identifying system activity than Orbiter with large complex data sets. The results also show that our new time-based hierarchical node grouping method improves performance in both tools, and participants found both tools significantly easier to use with the new time-based node grouping method. Subjective measures show that participants found InProv to require less mental activity, less physical activity, less work, and is less stressful to use. Our study also reveals one of the first cases of gender differences in visualization; both genders had comparable performance with InProv, but women had a significantly lower average accuracy (56%) compared to men (70%) with Orbiter.",
"title": ""
},
{
"docid": "3ff06c4ecf9b8619150c29c9c9a940b9",
"text": "It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.",
"title": ""
},
{
"docid": "3754b5c86e0032382f144ded5f1ca4d8",
"text": "Use and users have an important and acknowledged role to most designers of interactive systems. Nevertheless any touch of user hands does not in itself secure development of meaningful artifacts. In this article we stress the need for a professional PD practice in order to yield the full potentiality of user involvement. We suggest two constituting elements of such a professional PD practice. The existence of a shared 'where-to' and 'why' artifact and an ongoing reflection and off-loop reflection among practitioners in the PD process.",
"title": ""
},
{
"docid": "4309fd090591a107bce978d61aff6a34",
"text": "Regular exercise training is recognized as a powerful tool to improve work capacity, endothelial function and the cardiovascular risk profile in obesity, but it is unknown which of high-intensity aerobic exercise, moderate-intensity aerobic exercise or strength training is the optimal mode of exercise. In the present study, a total of 40 subjects were randomized to high-intensity interval aerobic training, continuous moderate-intensity aerobic training or maximal strength training programmes for 12 weeks, three times/week. The high-intensity group performed aerobic interval walking/running at 85-95% of maximal heart rate, whereas the moderate-intensity group exercised continuously at 60-70% of maximal heart rate; protocols were isocaloric. The strength training group performed 'high-intensity' leg press, abdominal and back strength training. Maximal oxygen uptake and endothelial function improved in all groups; the greatest improvement was observed after high-intensity training, and an equal improvement was observed after moderate-intensity aerobic training and strength training. High-intensity aerobic training and strength training were associated with increased PGC-1alpha (peroxisome-proliferator-activated receptor gamma co-activator 1alpha) levels and improved Ca(2+) transport in the skeletal muscle, whereas only strength training improved antioxidant status. Both strength training and moderate-intensity aerobic training decreased oxidized LDL (low-density lipoprotein) levels. Only aerobic training decreased body weight and diastolic blood pressure. In conclusion, high-intensity aerobic interval training was better than moderate-intensity aerobic training in improving aerobic work capacity and endothelial function. An important contribution towards improved aerobic work capacity, endothelial function and cardiovascular health originates from strength training, which may serve as a substitute when whole-body aerobic exercise is contra-indicated or difficult to perform.",
"title": ""
},
{
"docid": "cf639e8a3037d94d2e110a2a11411dc6",
"text": "Memory-based collaborative filtering (CF) has been studied extensively in the literature and has proven to be successful in various types of personalized recommender systems. In this paper, we develop a probabilistic framework for memory-based CF (PMCF). While this framework has clear links with classical memory-based CF, it allows us to find principled solutions to known problems of CF-based recommender systems. In particular, we show that a probabilistic active learning method can be used to actively query the user, thereby solving the \"new user problem.\" Furthermore, the probabilistic framework allows us to reduce the computational cost of memory-based CF by working on a carefully selected subset of user profiles, while retaining high accuracy. We report experimental results based on two real-world data sets, which demonstrate that our proposed PMCF framework allows an accurate and efficient prediction of user preferences.",
"title": ""
},
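The memory-based CF baseline that PMCF builds on predicts a rating as the active user's mean plus a similarity-weighted average of neighbours' deviations from their own means; a small dense-matrix sketch (cosine similarity over mean-centred rows, purely illustrative) is:

```python
import numpy as np

def predict_rating(R, mask, user, item, k=20):
    """Memory-based CF sketch. R: (users, items) ratings, mask: 1 where rated."""
    counts = np.maximum(mask.sum(axis=1), 1)
    means = (R * mask).sum(axis=1) / counts          # per-user mean rating
    centred = (R - means[:, None]) * mask            # mean-centred, zeros where unrated
    sims = centred @ centred[user]
    sims = sims / (np.linalg.norm(centred, axis=1) * np.linalg.norm(centred[user]) + 1e-9)
    raters = np.where(mask[:, item] > 0)[0]
    raters = raters[raters != user]
    if raters.size == 0:
        return means[user]
    top = raters[np.argsort(-sims[raters])][:k]      # k most similar raters of the item
    weights = sims[top]
    deviations = R[top, item] - means[top]
    return means[user] + float(weights @ deviations) / (np.abs(weights).sum() + 1e-9)
```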
{
"docid": "87c7875416503ab1f12de90a597959a4",
"text": "Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.",
"title": ""
},
{
"docid": "cb2c0c4e5454c1302a9569b687a50818",
"text": "Employee turnover is a serious concern in knowledge based organizations. When employees leave an organization, they carry with them invaluable tacit knowledge which is often the source of competitive advantage for the business. In order for an organization to continually have a higher competitive advantage over its competition, it should make it a duty to minimize employee attrition. This study identifies employee related attributes that contribute to the prediction of employees’ attrition in organizations. Three hundred and nine (309) complete records of employees of one of the Higher Institutions in Nigeria who worked in and left the institution between 1978 and 2006 were used for the study. The demographic and job related records of the employee were the main data which were used to classify the employee into some predefined attrition classes. Waikato Environment for Knowledge Analysis (WEKA) and See5 for Windows were used to generate decision tree models and rule-sets. The results of the decision tree models and rule-sets generated were then used for developing a a predictive model that was used to predict new cases of employee attrition. A framework for a software tool that can implement the rules generated in this study was also proposed.",
"title": ""
},
{
"docid": "6059b4bbf5d269d0a5f1f596b48c1acb",
"text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.",
"title": ""
},
{
"docid": "15208617386aeb77f73ca7c2b7bb2656",
"text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. Comparative result shows the modified design is effective when compared in terms of delay with the standard design.",
"title": ""
},
{
"docid": "fc74dadf88736675c860109a95fcdda1",
"text": "This paper presents the preliminary work done towards the development of a Gender Recognition System that can be incorporated into the Hindi Automatic Speech Recognition (ASR) System. Gender Recognition (GR) can help in the development of speaker-independent speech recognition systems. This paper presents a general approach to identifying feature vectors that effectively distinguish gender of a speaker from Hindi phoneme utterances. 10 vowels and 5 nasals of the Hindi language were studied for their effectiveness in identifying gender of the speaker. All the 10 vowel Phonemes performed well, while b] bZ] Å] ,] ,s] vks and vkS showed excellent gender distinction performance. All five nasals 3] ́] .k] u and e which were tested, showed a recognition accuracy of almost 100%. The Mel Frequency Cepstral Coefficients (MFCC) are widely used in ASR. The choice of MFCC as features in Gender Recognition will avoid additional computation. The effect of the MFCC feature vector dimension on the GR accuracy was studied and the findings presented. General Terms Automatic speech recognition in Hindi",
"title": ""
},
{
"docid": "6756ede63355b29d9ca5569dab62db26",
"text": "This paper presents an approach for the robust recognition of a complex and dynamic driving environment, such as an urban area, using on-vehicle multi-layer LIDAR. The multi-layer LIDAR alleviates the consequences of occlusion by vertical scanning; it can detect objects with different heights simultaneously, and therefore the influence of occlusion can be curbed. The road environment recognition algorithm proposed in this paper consists of three procedures: ego-motion estimation, construction and updating of a 3-dimensional local grid map, and the detection and tracking of moving objects. The integration of these procedures enables us to estimate ego-motion accurately, along with the positions and states of moving objects, the free area where vehicles and pedestrians can move freely, and the ‘unknown’ area, which have never previously been observed in a road environment.",
"title": ""
},
{
"docid": "ab23f66295574368ccd8fc4e1b166ecc",
"text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.",
"title": ""
},
{
"docid": "653e12c8242f5dfc1523fe9e43cec9a6",
"text": "The sentiment index of market participants has been extensively used for stock market prediction in recent years. Many financial information vendors also provide it as a service. However, utilizing market sentiment under the asset allocation framework has been rarely discussed. In this article, we investigate the role of market sentiment in an asset allocation problem. We propose to compute sentiment time series from social media with the help of natural language processing techniques. A novel neural network design, built upon an ensemble of evolving clustering and long short-term memory, is used to formalize sentiment information into market views. These views are later integrated into modern portfolio theory through a Bayesian approach. We analyze the performance of this asset allocation model from many aspects, such as stability of portfolios, computing of sentiment time series, and profitability in our simulations. Experimental results show that our model outperforms some of the most successful forecasting techniques. Thanks to the introduction of the evolving clustering method, the estimation accuracy of market views is significantly improved.",
"title": ""
},
{
"docid": "d3c8903fed280246ea7cb473ee87c0e7",
"text": "Reaction time has a been a favorite subject of experimental psychologists since the middle of the nineteenth century. However, most studies ask questions about the organization of the brain, so the authors spend a lot of time trying to determine if the results conform to some mathematical model of brain activity. This makes these papers hard to understand for the beginning student. In this review, I have ignored these brain organization questions and summarized the major literature conclusions that are applicable to undergraduate laboratories using my Reaction Time software. I hope this review helps you write a good report on your reaction time experiment. I also apologize to reaction time researchers for omissions and oversimplifications.",
"title": ""
},
{
"docid": "34118709a36ba09a822202753cbff535",
"text": "Our healthcare sector daily collects a huge data including clinical examination, vital parameters, investigation reports, treatment follow-up and drug decisions etc. But very unfortunately it is not analyzed and mined in an appropriate way. The Health care industry collects the huge amounts of health care data which unfortunately are not “mined” to discover hidden information for effective decision making for health care practitioners. Data mining refers to using a variety of techniques to identify suggest of information or decision making knowledge in database and extracting these in a way that they can put to use in areas such as decision support , Clustering ,Classification and Prediction. This paper has developed a Computer-Based Clinical Decision Support System for Prediction of Heart Diseases (CCDSS) using Naïve Bayes data mining algorithm. CCDSS can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, spO2,chest pain type, heart rate, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. CCDSS is Webbased, user-friendly, scalable, reliable and expandable. It is implemented on the PHPplatform. Keywords—Computer-Based Clinical Decision Support System(CCDSS), Heart disease, Data mining, Naïve Bayes.",
"title": ""
}
] |
scidocsrr
|
69494a0817f3d992c08b806da174099f
|
Automatic Coupling of Answer Extraction and Information Retrieval
|
[
{
"docid": "39717d9a00f28d99bbf0b1f4fd49f90d",
"text": "Question Answering (QA) systems are often built modularly, with a text retrieval component feeding forward into an answer extraction component. Conventional wisdom suggests that, the higher the quality of the retrieval results used as input to the answer extraction module, the better the extracted answers, and hence system accuracy, will be. This turns out to be a poor assumption, because text retrieval and answer extraction are tightly coupled. Improvements in retrieval quality can be lost at the answer extraction module, which can not necessarily recognize the additional answer candidates provided by improved retrieval. Going forward, to improve accuracy on the QA task, systems will need greater coordination between text retrieval and answer extraction modules.",
"title": ""
}
] |
[
{
"docid": "90f1e303325d2d9f56fdcc905924c7bf",
"text": "giving a statistic image for each contrast. P values for activations in the amygdala were corrected for the volume of brain analysed (specified as a sphere with radius 8 mm) 29. Anatomical localization for the group mean-condition-specific activations are reported in standard space 28. In all cases, the localization of the group mean activations was confirmed by registration with the subject's own MRIs. In an initial conditioning phase immediately before scanning, subjects viewed a sequence of greyscale images of four faces taken from a standard set of pictures of facial affect 30. Images of a single face were presented on a computer monitor screen for 75 ms at intervals of 15–25 s (mean 20 s). Each of the four faces was shown six times in a pseudorandom order. Two of the faces had angry expressions (A1 and A2), the other two being neutral (N1 and N2). One of the angry faces (CS+) was always followed by a 1-s 100-dB burst of white noise. In half of the subjects A1 was the CS+ face; in the other half, A2 was used. None of the other faces was ever paired with the noise. Before each of the 12 scanning windows, which occurred at 8-min intervals, a shortened conditioning sequence was played consisting of three repetitions of the four faces. During the 90-s scanning window, which seamlessly followed the conditioning phase, 12 pairs of faces, consisting of a target and mask, were shown at 5-s intervals. The target face was presented for 30 ms and was immediately followed by the masking face for 45 ms (Fig. 1). These stimulus parameters remained constant throughout all scans and effectively prevented any reportable awareness of the target face (which might be a neutral face or an angry face). There were four different conditions (Fig. 1), masked conditioned, non-masked conditioned, masked unconditioned, and non-masked unconditioned. Throughout the experiment, subjects performed the same explicit task, which was to detect any occurrence, however fleeting, of the angry faces. Immediately before the first conditioning sequence, subjects were shown the two angry faces and were instructed, for each stimulus presentation, to press a response button with the index finger of the right hand if one the angry faces appeared, or another button with the middle finger of the right hand if they did not see either of the angry faces. Throughout the acquisition and extinction phases, subjects' SCRs were monitored to …",
"title": ""
},
{
"docid": "4019d3f46ec0ef42145d8d63b62a88d0",
"text": "Learning policies on data synthesized by models can in principle quench the thirst of reinforcement learning algorithms for large amounts of real experience, which is often costly to acquire. However, simulating plausible experience de novo is a hard problem for many complex environments, often resulting in biases for modelbased policy evaluation and search. Instead of de novo synthesis of data, here we assume logged, real experience and model alternative outcomes of this experience under counterfactual actions, i.e. actions that were not actually taken. Based on this, we propose the Counterfactually-Guided Policy Search (CF-GPS) algorithm for learning policies in POMDPs from off-policy experience. It leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes. CF-GPS can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions. In contrast to off-policy algorithms based on Importance Sampling which re-weight data, CF-GPS leverages a model to explicitly consider alternative outcomes, allowing the algorithm to make better use of experience data. We find empirically that these advantages translate into improved policy evaluation and search results on a non-trivial grid-world task. Finally, we show that CF-GPS generalizes the previously proposed Guided Policy Search and that reparameterization-based algorithms such Stochastic Value Gradient can be interpreted as counterfactual methods.",
"title": ""
},
{
"docid": "81a3f339ab0bb1d3b7fe710d44bde5a6",
"text": "What are the units of attention? In addition to standard models holding that attention can select spatial regions and visual features, recent work suggests that in some cases attention can directly select discrete objects. This paper reviews the state of the art with regard to such 'object-based' attention, and explores how objects of attention relate to locations, reference frames, perceptual groups, surfaces, parts, and features. Also discussed are the dynamic aspects of objecthood, including the question of how attended objects are individuated in time, and the possibility of attending to simple dynamic motions and events. The final sections of this review generalize these issues beyond vision science, to other modalities and fields such as auditory objects of attention and the infant's 'object concept'.",
"title": ""
},
{
"docid": "3cc0707cec7af22db42e530399e762a8",
"text": "While watching television, people increasingly consume additional content related to what they are watching. We consider the task of finding video content related to a live television broadcast for which we leverage the textual stream of subtitles associated with the broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our dynamic query modeling approach significantly outperforms state-of-the-art baselines for stationary query modeling and for text-based retrieval in a television setting. In particular we find that carefully weighting terms and decaying these weights based on recency significantly improves effectiveness. Moreover, our method is highly efficient and can be used in a live television setting, i.e., in near real time.",
"title": ""
},
{
"docid": "2f02235636c5c0aecd8918cba512888d",
"text": "To determine whether an AIDS prevention mass media campaign influenced risk perception, self-efficacy and other behavioural predictors. We used household survey data collected from 2,213 sexually experienced male and female Kenyans aged 15-39. Respondents were administered a questionnaire asking them about their exposure to branded and generic mass media messages concerning HIV/AIDS and condom use. They were asked questions concerning their personal risk perception, self-efficacy, condom effectiveness, condom availability, and their embarrassment in obtaining condoms. Logistic regression analysis was used to determine the impact of exposure to mass media messages on these predictors of behaviour change. Those exposed to branded advertising messages were significantly more likely to consider themselves at higher risk of acquiring HIV and to believe in the severity of AIDS. Exposure to branded messages was also associated with a higher level of personal self-efficacy, a greater belief in the efficacy of condoms, a lower level of perceived difficulty in obtaining condoms and reduced embarrassment in purchasing condoms. Moreover, there was a dose-response relationship: a higher intensity of exposure to advertising was associated with more positive outcomes. Exposure to generic advertising messages was less frequently associated with positive health beliefs and these relationships were also weaker. Branded mass media campaigns that promote condom use as an attractive lifestyle choice are likely to contribute to the development of perceptions that are conducive to the adoption of condom use.",
"title": ""
},
{
"docid": "a25f169d851ff02380d139148f7429f6",
"text": "The refinement of checksums is an essential grand challenge. Given the current status of lossless information, theorists clearly desire the refinement of the locationidentity split, which embodies the essential principles of operating systems. Our focus in this paper is not on whether IPv4 can be made relational, constant-time, and decentralized, but rather on proposing new linear-time symmetries (YEW).",
"title": ""
},
{
"docid": "2337ac0f10020d7435ea0acc82d4af98",
"text": "With recent advances in brain imaging and neurosurgical techniques, there has been a renewed interest in the surgical separation of craniopagus twins. Successful separation in recent cases, along with widespread publicity, has attracted craniopagus twins from all over the world to be referred to pediatric neurosurgical centers for evaluation and consideration for surgical separation. It has become apparent, however, that the most critical decisions in surgical planning are related to separation of the blood supply to the conjoined brains. In fact, in craniopagus twins that survive pregnancy or the first few days of life, there is usually little shared brain tissue. The shared blood supply is far and away the more critical issue. It is very difficult to successfully separate craniopagus twins in one surgical procedure. Staged separation, with gradual re-routing of the shared blood supply, has been a successful alternative. We discuss here our experience with three sets of craniopagus twins and our approach to staged separation.",
"title": ""
},
{
"docid": "7f605604647564e67c5d910003a9707a",
"text": "Given a query consisting of a mention (name string) and a background document, entity disambiguation calls for linking the mention to an entity from reference knowledge base like Wikipedia. Existing studies typically use hand-crafted features to represent mention, context and entity, which is laborintensive and weak to discover explanatory factors of data. In this paper, we address this problem by presenting a new neural network approach. The model takes consideration of the semantic representations of mention, context and entity, encodes them in continuous vector space and effectively leverages them for entity disambiguation. Specifically, we model variable-sized contexts with convolutional neural network, and embed the positions of context words to factor in the distance between context word and mention. Furthermore, we employ neural tensor network to model the semantic interactions between context and mention. We conduct experiments for entity disambiguation on two benchmark datasets from TAC-KBP 2009 and 2010. Experimental results show that our method yields state-of-the-art performances on both datasets.",
"title": ""
},
{
"docid": "411f47c2edaaf3696d44521d4a97eb28",
"text": "An energy-efficient 3 Gb/s current-mode interface scheme is proposed for on-chip global interconnects and silicon interposer channels. The transceiver core consists of an open-drain transmitter with one-tap pre-emphasis and a current sense amplifier load as the receiver. The current sense amplifier load is formed by stacking a PMOS diode stage and a cross-coupled NMOS stage, providing an optimum current-mode receiver without any bias current. The proposed scheme is verified with two cases of transceivers implemented in 65 nm CMOS. A 10 mm point-to-point data-only channel shows an energy efficiency of 9.5 fJ/b/mm, and a 20 mm four-drop source-synchronous link achieves 29.4 fJ/b/mm including clock and data channels.",
"title": ""
},
{
"docid": "ad00866e5bae76020e02c6cc76360ec8",
"text": "The CASAS architecture facilitates the development and implementation of future smart home technologies by offering an easy-to-install lightweight design that provides smart home capabilities out of the box with no customization or training.",
"title": ""
},
{
"docid": "3940ccc6f409140582680de1fdc0f610",
"text": "Fermentation of food components by microbes occurs both during certain food production processes and in the gastro-intestinal tract. In these processes specific compounds are produced that originate from either biotransformation reactions or biosynthesis, and that can affect the health of the consumer. In this review, we summarize recent advances highlighting the potential to improve the nutritional status of a fermented food by rational choice of food-fermenting microbes. The vast numbers of microbes residing in the human gut, the gut microbiota, also give rise to a broad array of health-active molecules. Diet and functional foods are important modulators of the gut microbiota activity that can be applied to improve host health. A truly multidisciplinary approach is required to increase our understanding of the molecular mechanisms underlying health beneficial effects that arise from the interaction of diet, microbes and the human body.",
"title": ""
},
{
"docid": "c98d96d2263aa1c701accae83b451fca",
"text": "Cannabidiol (CBD), a major phytocannabinoid constituent of cannabis, is attracting growing attention in medicine for its anxiolytic, antipsychotic, antiemetic and anti-inflammatory properties. However, up to this point, a comprehensive literature review of the effects of CBD in humans is lacking. The aim of the present systematic review is to examine the randomized and crossover studies that administered CBD to healthy controls and to clinical patients. A systematic search was performed in the electronic databases PubMed and EMBASE using the key word \"cannabidiol\". Both monotherapy and combination studies (e.g., CBD + ∆9-THC) were included. A total of 34 studies were identified: 16 of these were experimental studies, conducted in healthy subjects, and 18 were conducted in clinical populations, including multiple sclerosis (six studies), schizophrenia and bipolar mania (four studies), social anxiety disorder (two studies), neuropathic and cancer pain (two studies), cancer anorexia (one study), Huntington's disease (one study), insomnia (one study), and epilepsy (one study). Experimental studies indicate that a high-dose of inhaled/intravenous CBD is required to inhibit the effects of a lower dose of ∆9-THC. Moreover, some experimental and clinical studies suggest that oral/oromucosal CBD may prolong and/or intensify ∆9-THC-induced effects, whereas others suggest that it may inhibit ∆9-THC-induced effects. Finally, preliminary clinical trials suggest that high-dose oral CBD (150-600 mg/d) may exert a therapeutic effect for social anxiety disorder, insomnia and epilepsy, but also that it may cause mental sedation. Potential pharmacokinetic and pharmacodynamic explanations for these results are discussed.",
"title": ""
},
{
"docid": "040329beb0f4688ced46d87a51dac169",
"text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.",
"title": ""
},
{
"docid": "a208e4f4e6092a731d4ec662c1cea1bc",
"text": "The CDMA channel with randomly and independently chosen spreading sequences accurately models the situation where pseudonoise sequences span many symbol periods. Furthermore, its analysis provides a comparison baseline for CDMA channels with deterministic signature waveforms spanning one symbol period. We analyze the spectral efficiency (total capacity per chip) as a function of the number of users, spreading gain, and signal-to-noise ratio, and we quantify the loss in efficiency relative to an optimally chosen set of signature sequences and relative to multiaccess with no spreading. White Gaussian background noise and equal-power synchronous users are assumed. The following receivers are analyzed: a) optimal joint processing, b) single-user matched filtering, c) decorrelation, and d) MMSE linear processing.",
"title": ""
},
{
"docid": "e1adb8ebfd548c2aca5110e2a9e8d667",
"text": "This paper introduces an active object detection and localization framework that combines a robust untextured object detection and 3D pose estimation algorithm with a novel next-best-view selection strategy. We address the detection and localization problems by proposing an edge-based registration algorithm that refines the object position by minimizing a cost directly extracted from a 3D image tensor that encodes the minimum distance to an edge point in a joint direction/location space. We face the next-best-view problem by exploiting a sequential decision process that, for each step, selects the next camera position which maximizes the mutual information between the state and the next observations. We solve the intrinsic intractability of this solution by generating observations that represent scene realizations, i.e. combination samples of object hypothesis provided by the object detector, while modeling the state by means of a set of constantly resampled particles. Experiments performed on different real world, challenging datasets confirm the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "be5e1336187b80bc418b2eb83601fbd4",
"text": "Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in robotics, including driver assistance systems, road scene understanding and surveillance systems. The two main practical requirements for fielding such systems are very high accuracy and real-time speed: we need pedestrian detectors that are accurate enough to be relied on and are fast enough to run on systems with limited compute power. This paper addresses both of these requirements by combining very accurate deep-learning-based classifiers within very efficient cascade classifier frameworks. Deep neural networks (DNN) have been shown to excel at classification tasks [5], and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second (FPS). The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is the first work we are aware of that achieves high accuracy while running in real-time. To achieve this, we combine a fast cascade [2] with a cascade of classifiers, which we propose to be DNNs. Our approach is unique, as it is the only one to produce a pedestrian detector at real-time speeds (15 FPS) that is also very accurate. Figure 1 visualizes existing methods as plotted on the accuracy computational time axis, measured on the challenging Caltech pedestrian detection benchmark [4]. As can be seen in this figure, our approach is the only one to reside in the high accuracy, high speed region of space, which makes it particularly appealing for practical applications. Fast Deep Network Cascade. Our main architecture is a cascade structure in which we take advantage of the fast features for elimination, VeryFast [2] as an initial stage and combine it with small and large deep networks [1, 5] for high accuracy. The VeryFast algorithm is a cascade itself, but of boosting classifiers. It reduces recall with each stage, producing a high average miss rate in the end. Since the goal is eliminate many non-pedestrian patches and at the same time keep the recall high, we used only 10% of the stages in that cascade. Namely, we use a cascade of only 200 stages, instead of the 2000 in the original work. The first stage of our deep cascade processes all image patches that have high confidence values and pass through the VeryFast classifier. We here utilize the idea of a tiny convolutional network proposed by our prior work [1]. The tiny deep network has three layers only and features a 5x5 convolution, a 1x1 convolution and a very shallow fully-connected layer of 512 units. It reduces the massive computational time that is needed to evaluate a full DNN at all candidate locations filtered by the previous stage. The speedup produced by the tiny network, is a crucial component in achieving real-time performance in our fast cascade method. The baseline deep neural network is based on the original deep network of Krizhevsky et al [5]. As mentioned, this network in general is extremely slow to be applied alone. To achieve real-time speeds, we first apply it to only the remaining filtered patches from the previous two stages. 
Another key difference is that we reduced the depths of some of the convolutional layers and the sizes of the receptive fields, which is specifically done to gain speed advantage. Runtime. Our deep cascade works at 67ms on a standard NVIDIA K20 Tesla GPU per 640x480 image, which is a runtime of 15 FPS. The time breakdown is as follows. The soft-cascade takes about 7 milliseconds (ms). About 1400 patches are passed through per image from the fast cascade. The tiny DNN runs at 0.67 ms per batch of 128, so it can process the patches in 7.3 ms. The final stage of the cascade (which is the baseline classifier) takes about 53ms. This is an overall runtime of 67ms. Experimental evaluation. We evaluate the performance of the Fast Deep Network Cascade using the training and test protocols established in the Caltech pedestrian benchmark [4]. We tested several scenarios by training on the Caltech data only, denoted as DeepCascade, on an indeFigure 1: Performance of pedestrian detection methods on the accuracy vs speed axis. Our DeepCascade method achieves both smaller missrates and real-time speeds. Methods for which the runtime is more than 5 seconds per image, or is unknown, are plotted on the left hand side. The SpatialPooling+/Katamari methods use additional motion information.",
"title": ""
},
{
"docid": "562ec4c39f0d059fbb9159ecdecd0358",
"text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.",
"title": ""
},
{
"docid": "82e6da590f8f836c9a06c26ef4440005",
"text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.",
"title": ""
},
{
"docid": "c6ab3d07e068637082b88160ca2f4988",
"text": "This paper focuses on the design of a real-time particle-swarm-optimization-based proportional-integral-differential (PSO-PID) control scheme for the levitated balancing and propulsive positioning of a magnetic-levitation (maglev) transportation system. The dynamic model of a maglev transportation system, including levitated electromagnets and a propulsive linear induction motor based on the concepts of mechanical geometry and motion dynamics, is first constructed. The control objective is to design a real-time PID control methodology via PSO gain selections and to directly ensure the stability of the controlled system without the requirement of strict constraints, detailed system information, and auxiliary compensated controllers despite the existence of uncertainties. The effectiveness of the proposed PSO-PID control scheme for the maglev transportation system is verified by numerical simulations and experimental results, and its superiority is indicated in comparison with PSO-PID in previous literature and conventional sliding-mode (SM) control strategies. With the proposed PSO-PID control scheme, the controlled maglev transportation system possesses the advantages of favorable control performance without chattering phenomena in SM control and robustness to uncertainties superior to fixed-gain PSO-PID control.",
"title": ""
},
{
"docid": "9feaf3594f864924e32e0bf9aa51ffd3",
"text": "Ab.wacrThis paper presents ABROAD, an adaptive medium access control (MAC) protocol for reliable broadcast packet transmission in wireless networks. ABROAD incorporates a collision-avoidance handshake within each slot of a synchronous transmission schedule, allowing nodes to reclaim and/or rewe idle slots while maintaining bounded ac. cess delay. Thus, ABROAD provides worst-case performance guarantees while remaining adaptive to local changes in traffic load and node eonnectivity. We analyze the optimal worst-case performance of ABROAD, and show that there is a strict increase in the number of broadcast pack. cts per second ovcr a pure time division multiple access (TDMA) protocol. Extensive simulation confirms our analysis, and also demonslrates that ABROAD aulperfoms broadcast protocols based an reliable unicast packet delivery schemes, such as the IEEE 802.11 MAC standard.",
"title": ""
}
] |
scidocsrr
|
f6a2d39376c399ca6603cef87034eb89
|
Digital Advertising Traffic Operation: Machine Learning for Process Discovery
|
[
{
"docid": "2c92948916257d9b164e7d65aa232d3e",
"text": "Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we propose a technique for rediscovering workflow models. This technique uses workflow logs to discover the workflow process as it is actually being executed. The workflow log contains information about events taking place. We assume that these events are totally ordered and each event refers to one task being executed for a single case. This information can easily be extracted from transactional information systems (e.g., Enterprise Resource Planning systems such as SAP and Baan). The rediscovering technique proposed in this paper can deal with noise and can also be used to validate workflow processes by uncovering and measuring the discrepancies between prescriptive models and actual process executions.",
"title": ""
}
] |
[
{
"docid": "b9b2db174b8fa77516f1c03186a993da",
"text": "The cutting stock problem it is of great interest in relation with several real world problems. Basically it means that there are some smaller pieces that have to be cut from a greater stock piece, in such a way, that the remaining part of the stock piece should be minimal. The classical solution methods of this problem generally need a great amount of calculation. In order to reduce the computational load they use heuristics. A newer solution method is presented in this paper, which is based on a genetic technique. This method uses a tree representation of the cutting pattern, and combines different patterns in order to achive patterns with higher performance. The combination of the cutting patterns is realized by a combined crossover mutation operator. An application of the proposed method is presented briefly in the end of the paper.",
"title": ""
},
{
"docid": "f0c25bb609bc6946b558bcd0ccdaee22",
"text": "A biologically motivated computational model of bottom-up visual selective attention was used to examine the degree to which stimulus salience guides the allocation of attention. Human eye movements were recorded while participants viewed a series of digitized images of complex natural and artificial scenes. Stimulus dependence of attention, as measured by the correlation between computed stimulus salience and fixation locations, was found to be significantly greater than that expected by chance alone and furthermore was greatest for eye movements that immediately follow stimulus onset. The ability to guide attention of three modeled stimulus features (color, intensity and orientation) was examined and found to vary with image type. Additionally, the effect of the drop in visual sensitivity as a function of eccentricity on stimulus salience was examined, modeled, and shown to be an important determiner of attentional allocation. Overall, the results indicate that stimulus-driven, bottom-up mechanisms contribute significantly to attentional guidance under natural viewing conditions.",
"title": ""
},
{
"docid": "b206560e0c9f3e59c8b9a8bec6f12462",
"text": "A symmetrical microstrip directional coupler design using the synthesis technique without prior knowledge of the physical geometry of the directional coupler is analytically given. The introduced design method requires only the information of the port impedances, the coupling level, and the operational frequency. The analytical results are first validated by using a planar electromagnetic simulation tool and then experimentally verified. The error between the experimental and analytical results is found to be within 3% for the worst case. The design charts that give all the physical dimensions, including the length of the directional coupler versus frequency and different coupling levels, are given for alumina, Teflon, RO4003, FR4, and RF-60, which are widely used in microwave applications. The complete design of symmetrical two-line microstrip directional couplers can be obtained for the first time using our results in this paper.",
"title": ""
},
{
"docid": "ff9ca485a07dca02434396eca0f0c94f",
"text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.",
"title": ""
},
{
"docid": "97a1453d230df4f8c57eed1d3a1aaa19",
"text": "In this letter, an isolation improvement method between two closely packed planar inverted-F antennas (PIFAs) is proposed via a miniaturized ground slot with a chip capacitor. The proposed T-shaped ground slot acts as a notch filter, and the capacitor is utilized to reduce the slot length. The equivalent circuit model of the proposed slot with the capacitor is derived. The measured isolation between two PIFAs is down to below -20 dB at the whole WLAN band of 2.4 GHz.",
"title": ""
},
{
"docid": "470810494ae81cc2361380c42116c8d7",
"text": "Sustainability is significantly important for fashion business due to consumers’ increasing awareness of environment. When a fashion company aims to promote sustainability, the main linkage is to develop a sustainable supply chain. This paper contributes to current knowledge of sustainable supply chain in the textile and clothing industry. We first depict the structure of sustainable fashion supply chain including eco-material preparation, sustainable manufacturing, green distribution, green retailing, and ethical consumers based on the extant literature. We study the case of the Swedish fast fashion company, H&M, which has constructed its sustainable supply chain in developing eco-materials, providing safety training, monitoring sustainable manufacturing, reducing carbon emission in distribution, and promoting eco-fashion. Moreover, based on the secondary data and analysis, we learn the lessons of H&M’s sustainable fashion supply chain from the country perspective: (1) the H&M’s sourcing managers may be more likely to select suppliers in the countries with lower degrees of human wellbeing; (2) the H&M’s supply chain manager may set a higher level of inventory in a country with a higher human wellbeing; and (3) the H&M CEO may consider the degrees of human wellbeing and economic wellbeing, instead of environmental wellbeing when launching the online shopping channel in a specific country.",
"title": ""
},
{
"docid": "e1066f3b7ff82667dbc7186f357dd406",
"text": "Generative adversarial networks (GANs) are becoming increasingly popular for image processing tasks. Researchers have started using GAN s for speech enhancement, but the advantage of using the GAN framework has not been established for speech enhancement. For example, a recent study reports encouraging enhancement results, but we find that the architecture of the generator used in the GAN gives better performance when it is trained alone using the $L_1$ loss. This work presents a new GAN for speech enhancement, and obtains performance improvement with the help of adversarial training. A deep neural network (DNN) is used for time-frequency mask estimation, and it is trained in two ways: regular training with the $L_1$ loss and training using the GAN framework with the help of an adversary discriminator. Experimental results suggest that the GAN framework improves speech enhancement performance. Further exploration of loss functions, for speech enhancement, suggests that the $L_1$ loss is consistently better than the $L_2$ loss for improving the perceptual quality of noisy speech.",
"title": ""
},
{
"docid": "f1d1a73f21dcd1d27da4e9d4a93c5581",
"text": "Movements of interfaces can be analysed in terms of whether they are sensible, sensable and desirable. Sensible movements are those that users naturally perform; sensable are those that can be measured by a computer; and desirable movements are those that are required by a given application. We show how a systematic comparison of sensible, sensable and desirable movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: the Augurscope II, a mobile augmented reality interface for outdoors; the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs; and pointing flashlights at walls and posters in order to play sounds.",
"title": ""
},
{
"docid": "1b7efa9ffda9aa23187ae7028ea5d966",
"text": "Tools for clinical assessment and escalation of observation and treatment are insufficiently established in the newborn population. We aimed to provide an overview over early warning- and track and trigger systems for newborn infants and performed a nonsystematic review based on a search in Medline and Cinahl until November 2015. Search terms included 'infant, newborn', 'early warning score', and 'track and trigger'. Experts in the field were contacted for identification of unpublished systems. Outcome measures included reference values for physiological parameters including respiratory rate and heart rate, and ways of quantifying the extent of deviations from the reference. Only four neonatal early warning scores were published in full detail, and one system for infants with cardiac disease was considered as having a more general applicability. Temperature, respiratory rate, heart rate, SpO2, capillary refill time, and level of consciousness were parameters commonly included, but the definition and quantification of 'abnormal' varied slightly. The available scoring systems were designed for term and near-term infants in postpartum wards, not neonatal intensive care units. In conclusion, there is a limited availability of neonatal early warning scores. Scoring systems for high-risk neonates in neonatal intensive care units and preterm infants were not identified.",
"title": ""
},
{
"docid": "d3834e337ca661d3919674a8acc1fa0c",
"text": "Relative (or receiver) operating characteristic (ROC) curves are a graphical representation of the relationship between sensitivity and specificity of a laboratory test over all possible diagnostic cutoff values. Laboratory medicine has been slow to adopt the use of ROC curves for the analysis of diagnostic test performance. In this tutorial, we discuss the advantages and limitations of the ROC curve for clinical decision making in laboratory medicine. We demonstrate the construction and statistical uses of ROC analysis, review its published applications in clinical pathology, and comment on its role in the decision analytic framework in laboratory medicine.",
"title": ""
},
{
"docid": "049c9e3abf58bfd504fa0645bb4d1fdc",
"text": "The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.",
"title": ""
},
{
"docid": "74a9612c1ca90a9d7b6152d19af53d29",
"text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.",
"title": ""
},
{
"docid": "45be2fbf427a3ea954a61cfd5150db90",
"text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. We identify several evaluation criteria, and propose manual and automatic evaluation approaches.",
"title": ""
},
{
"docid": "2da6c199c7561855fde9be6f4798a4af",
"text": "Ontogenetic development of the digestive system in golden pompano (Trachinotus ovatus, Linnaeus 1758) larvae was histologically and enzymatically studied from hatch to 32 day post-hatch (DPH). The development of digestive system in golden pompano can be divided into three phases: phase I starting from hatching and ending at the onset of exogenous feeding; phase II starting from first feeding (3 DPH) and finishing at the formation of gastric glands; and phase III starting from the appearance of gastric glands on 15 DPH and continuing onward. The specific activities of trypsin, amylase, and lipase increased sharply from the onset of first feeding to 5–7 DPH, followed by irregular fluctuations. Toward the end of this study, the specific activities of trypsin and amylase showed a declining trend, while the lipase activity remained at similar levels as it was at 5 DPH. The specific activity of pepsin was first detected on 15 DPH and increased with fish age. The dynamics of digestive enzymes corresponded to the structural development of the digestive system. The enzyme activities tend to be stable after the formation of the gastric glands in fish stomach on 15 DPH. The composition of digestive enzymes in larval pompano indicates that fish are able to digest protein, lipid and carbohydrate at early developmental stages. Weaning of larval pompano is recommended from 15 DPH onward. Results of the present study lead to a better understanding of the ontogeny of golden pompano during the larval stage and provide a guide to feeding and weaning of this economically important fish in hatcheries.",
"title": ""
},
{
"docid": "9daa362cc15e988abdc117786b000741",
"text": "The objective of this paper is to develop the hybrid neural network models for bankruptcy prediction. The proposed hybrid neural network models are (1) a MDA-assisted neural network, (2) an ID3-assisted neural network, and (3) a SOFM(self organizing feature map)-assisted neural network. Both the MDA-assisted neural network and the ID3-assisted neural network are the neural network models operating with the input variables selected by the MDA method and ID3 respectively. The SOFM-assisted neural network combines a backpropagation model (supervised learning) with a SOFM model (unsupervised learning). The performance of the hybrid neural network model is evaluated using MDA and ID3 as a benchmark. Empirical results using Korean bankruptcy data show that hybrid neural network models are very promising neural network models for bankruptcy prediction in terms of predictive accuracy and adaptability.",
"title": ""
},
{
"docid": "39188ae46f22dd183f356ba78528b720",
"text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.",
"title": ""
},
{
"docid": "ca62a58ac39d0c2daaa573dcb91cd2e0",
"text": "Blast-related head injuries are one of the most prevalent injuries among military personnel deployed in service of Operation Iraqi Freedom. Although several studies have evaluated symptoms after blast injury in military personnel, few studies compared them to nonblast injuries or measured symptoms within the acute stage after traumatic brain injury (TBI). Knowledge of acute symptoms will help deployed clinicians make important decisions regarding recommendations for treatment and return to duty. Furthermore, differences more apparent during the acute stage might suggest important predictors of the long-term trajectory of recovery. This study evaluated concussive, psychological, and cognitive symptoms in military personnel and civilian contractors (N = 82) diagnosed with mild TBI (mTBI) at a combat support hospital in Iraq. Participants completed a clinical interview, the Automated Neuropsychological Assessment Metric (ANAM), PTSD Checklist-Military Version (PCL-M), Behavioral Health Measure (BHM), and Insomnia Severity Index (ISI) within 72 hr of injury. Results suggest that there are few differences in concussive symptoms, psychological symptoms, and neurocognitive performance between blast and nonblast mTBIs, although clinically significant impairment in cognitive reaction time for both blast and nonblast groups is observed. Reductions in ANAM accuracy were related to duration of loss of consciousness, not injury mechanism.",
"title": ""
},
{
"docid": "fd208ec9a2d74306495ac8c6d454bfd6",
"text": "This qualitative study investigates the perceptions of suburban middle school students’ on academic motivation and student engagement. Ten students, grades 6-8, were randomly selected by the researcher from school counselors’ caseloads and the primary data collection techniques included two types of interviews; individual interviews and focus group interviews. Findings indicate students’ motivation and engagement in middle school is strongly influenced by the social relationships in their lives. The interpersonal factors identified by students were peer influence, teacher support and teacher characteristics, and parental behaviors. Each of these factors consisted of academic and social-emotional support which hindered and/or encouraged motivation and engagement. Students identified socializing with their friends as a means to want to be in school and to engage in learning. Also, students are more engaged and motivated if they believe their teachers care about their academic success and value their job. Lastly, parental involvement in academics appeared to be more crucial for younger students than older students in order to encourage motivation and engagement in school. MIDDLE SCHOOL STUDENTS’ PERCEPTIONS 5 Middle School Students’ Perceptions on Student Engagement and Academic Motivation Middle School Students’ Perceptions on Student Engagement and Academic Motivation Early adolescence marks a time for change for students academically and socially. Students are challenged academically in the sense that there is greater emphasis on developing specific intellectual and cognitive capabilities in school, while at the same time they are attempting to develop social skills and meaningful relationships. It is often easy to overlook the social and interpersonal challenges faced by students in the classroom when there is a large focus on grades in education, especially since teachers’ competencies are often assessed on their students’ academic performance. When schools do not consider psychosocial needs of students, there is a decrease in academic motivation and interest, lower levels of student engagement and poorer academic performance (i.e. grades) for middle school students (Wang & Eccles, 2013). In fact, students who report high levels of engagement in school are 75% more likely to have higher grades and higher attendance rates. Disengaged students tend to have lower grades and are more likely to drop out of school (Klem & Connell, 2004). Therefore, this research has focused on understanding the connections between certain interpersonal influences and academic motivation and engagement.",
"title": ""
},
{
"docid": "d4bd583808c9e105264c001cbcb6b4b0",
"text": "It is common for clinicians, researchers, and public policymakers to describe certain drugs or objects (e.g., games of chance) as “addictive,” tacitly implying that the cause of addiction resides in the properties of drugs or other objects. Conventional wisdom encourages this view by treating different excessive behaviors, such as alcohol dependence and pathological gambling, as distinct disorders. Evidence supporting a broader conceptualization of addiction is emerging. For example, neurobiological research suggests that addictive disorders might not be independent:2 each outwardly unique addiction disorder might be a distinctive expression of the same underlying addiction syndrome. Recent research pertaining to excessive eating, gambling, sexual behaviors, and shopping also suggests that the existing focus on addictive substances does not adequately capture the origin, nature, and processes of addiction. The current view of separate addictions is similar to the view espoused during the early days of AIDS diagnosis, when rare diseases were not",
"title": ""
},
{
"docid": "a9a22c9c57e9ba8c3deefbea689258d5",
"text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition. In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.",
"title": ""
}
] |
scidocsrr
|
e4003e7c2bc849b3b3a60c67834e7a31
|
The affective shift model of work engagement.
|
[
{
"docid": "cfddb85a8c81cb5e370fe016ea8d4c5b",
"text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.",
"title": ""
},
{
"docid": "b89099e9b01a83368a1ebdb2f4394eba",
"text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.",
"title": ""
}
] |
[
{
"docid": "e82459841d697a538f3ab77817ed45e7",
"text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.",
"title": ""
},
{
"docid": "0e2d5444d16f7c710039f6145473131c",
"text": "In this paper, a novel design approach for the development of robot hands is presented. This approach, that can be considered alternative to the “classical” one, takes into consideration compliant structures instead of rigid ones. Compliance effects, which were considered in the past as a “defect” to be mechanically eliminated, can be viceversa regarded as desired features and can be properly controlled in order to achieve desired properties from the robotic device. In particular, this is true for robot hands, where the mechanical complexity of “classical” design solutions has always originated complicated structures, often with low reliability and high costs. In this paper, an alternative solution to the design of dexterous robot hand is illustrated, considering a “mechatronic approach” for the integration of the mechanical structure, the sensory and electronic system, the control and the actuation part. Moreover, the preliminary experimental activity on a first prototype is reported and discussed. The results obtained so far, considering also reliability, costs and development time, are very encouraging, and allows to foresee a wider diffusion of dextrous hands for robotic applications.",
"title": ""
},
{
"docid": "cc17ac1e38c98d3066cc63b15b931726",
"text": "We present BPMN Miner 2.0: a tool that extracts hierarchical and block-structured BPMN process models from event logs. Given an event log in XES format, the tool partitions it into sub-logs (one per subprocess) and discovers a BPMN process model from each sub-log using existing techniques for discovering BPMN process models via heuristics nets or Petri nets. A drawback of these techniques is that they often produce spaghetti-like models and in some cases unsound models. Accordingly, BPMN Miner 2.0 applies post-processing steps to remove unsound constructions as well as a technique to block-structrure the resulting process models in a behavior-preserving manner. The tool is available as a standalone Java tool as well as a ProM and an Apromore plugin. The target audience of this demonstration includes process mining researchers as well as practitioners interested in exploring the potential of process mining using BPMN.",
"title": ""
},
{
"docid": "a4f074b8e6b6c826e14b8f245a63b227",
"text": "The high natural abundance of silicon, together with its excellent reliability and good efficiency in solar cells, suggest its continued use in production of solar energy, on massive scales, for the foreseeable future. Although organics, nanocrystals, nanowires and other new materials hold significant promise, many opportunities continue to exist for research into unconventional means of exploiting silicon in advanced photovoltaic systems. Here, we describe modules that use large-scale arrays of silicon solar microcells created from bulk wafers and integrated in diverse spatial layouts on foreign substrates by transfer printing. The resulting devices can offer useful features, including high degrees of mechanical flexibility, user-definable transparency and ultrathin-form-factor microconcentrator designs. Detailed studies of the processes for creating and manipulating such microcells, together with theoretical and experimental investigations of the electrical, mechanical and optical characteristics of several types of module that incorporate them, illuminate the key aspects.",
"title": ""
},
{
"docid": "223505549222e4b6e7e46d21e67b5ab2",
"text": "We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held out perplexity. We also propose a generalization to existing continuous stack models (Joulin & Mikolov, 2015; Grefenstette et al., 2015) to allow a variable number of pop operations more naturally that further improves performance. We further evaluate these language models in terms of their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state of the art results using memory augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories claiming a context-free backbone for natural language.",
"title": ""
},
{
"docid": "22f49f2d6e3021516d93d9a96c408dbb",
"text": "This paper presents Flower menu, a new type of Marking menu that does not only support straight, but also curved gestures for any of the 8 usual orientations. Flower menus make it possible to put many commands at each menu level and thus to create as large a hierarchy as needed for common applications. Indeed our informal analysis of menu breadth in popular applications shows that a quarter of them have more than 16 items. Flower menus can easily contain 20 items and even more (theoretical maximum of 56 items). Flower menus also support within groups as well as hierarchical groups. They can thus favor breadth organization (within groups) or depth organization (hierarchical groups): as a result, the designers can lay out items in a very flexible way in order to reveal meaningful item groupings. We also investigate the learning performance of the expert mode of Flower menus. A user experiment is presented that compares linear menus (baseline condition), Flower menus and Polygon menus, a variant of Marking menus that supports a breadth of 16 items. Our experiment shows that Flower menus are more efficient than both Polygon and Linear menus for memorizing command activation in expert mode.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "e31901738e78728a7376457f7d1acd26",
"text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.",
"title": ""
},
{
"docid": "597a3b52fd5114228d74398756d3359f",
"text": "The authors report a meta-analysis of individual differences in detecting deception, confining attention to occasions when people judge strangers' veracity in real-time with no special aids. The authors have developed a statistical technique to correct nominal individual differences for differences introduced by random measurement error. Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others' statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar's credibility than any other individual difference.",
"title": ""
},
{
"docid": "cc8adbaf01e3ab61546fd875724ac270",
"text": "This paper presents the image information mining based on a communication channel concept. The feature extraction algorithms encode the image, while an analysis of topic discovery will decode and send its content to the user in the shape of a semantic map. We consider this approach for a real meaning based semantic annotation of very high resolution remote sensing images. The scene content is described using a multi-level hierarchical information representation. Feature hierarchies are discovered considering that higher levels are formed by combining features from lower level. Such a level to level mapping defines our methodology as a deep learning process. The whole analysis can be divided in two major learning steps. The first one regards the Bayesian inference to extract objects and assign basic semantic to the image. The second step models the spatial interactions between the scene objects based on Latent Dirichlet Allocation, performing a high level semantic annotation. We used a WorldView2 image to exemplify the processing results.",
"title": ""
},
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "41cfe93db7c4635e106a1d620ea31036",
"text": "Neuroblastoma (NBL) and medulloblastoma (MBL) are tumors of the neuroectoderm that occur in children. NBL and MBL express Trk family tyrosine kinase receptors, which regulate growth, differentiation, and cell death. CEP-751 (KT-6587), an indolocarbazole derivative, is an inhibitor of Trk family tyrosine kinases at nanomolar concentrations. This study was designed to determine the effect of CEP-751 on the growth of NBL and MBL cell lines as xenografts. In vivo studies were conducted on four NBL cell lines (IMR-5, CHP-134, NBL-S, and SY5Y) and three MBL cell lines (D283, D341, and DAOY) using two treatment schedules: (a) treatment was started after the tumors were measurable (therapeutic study); or (b) 4-6 days after inoculation, before tumors were palpable (prevention study). CEP-751 was given at 21 mg/kg/dose administered twice a day, 7 days a week; the carrier vehicle was used as a control. In therapeutic studies, a significant difference in tumor size was seen between treated and control animals with IMR-5 on day 8 (P = 0.01), NBL-S on day 17 (P = 0.016), and CHP-134 on day 15 (P = 0.034). CEP-751 also had a significant growth-inhibitory effect on the MBL line D283 (on day 39, P = 0.031). Inhibition of tumor growth of D341 did not reach statistical significance, and no inhibition was apparent with DAOY. In prevention studies, CEP-751 showed a modest growth-inhibitory effect on IMR5 (P = 0.062) and CHP-134 (P = 0.049). Furthermore, inhibition of growth was greater in the SY5Y cell line transfected with TrkB compared with the untransfected parent cell line expressing no detectable TrkB. Terminal deoxynucleotidyl transferase-mediated nick end labeling studies showed CEP-751 induced apoptosis in the treated CHP-134 tumors, whereas no evidence of apoptosis was seen in the control tumors. Finally, there was no apparent toxicity identified in any of the treated mice. These results suggest that CEP-751 may be a useful therapeutic agent for NBL or MBL.",
"title": ""
},
{
"docid": "7c3457a5ca761b501054e76965b41327",
"text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.",
"title": ""
},
{
"docid": "04f4058d37a33245abf8ed9acd0af35d",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "497d6e0bf6f582924745c7aa192579e7",
"text": "The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.",
"title": ""
},
{
"docid": "26e423810e3658cc1c2dcbc682c3512c",
"text": "Recent years have witnessed the increasing threat of phishing attacks on mobile platforms. In fact, mobile phishing is more dangerous due to the limitations of mobile phones and mobile user habits. Existing schemes designed for phishing attacks on computers/laptops cannot effectively address phishing attacks on mobile devices. This paper presents MobiFish, a novel automated lightweight anti-phishing scheme for mobile platforms. MobiFish verifies the validity of web pages and applications (Apps) by comparing the actual identity to the identity claimed by the web pages and Apps. MobiFish has been implemented on the Nexus 4 smartphone running the Android 4.2 operating system. We experimentally evaluate the performance of MobiFish with 100 phishing URLs and corresponding legitimate URLs, as well as fake Facebook Apps. The result shows that MobiFish is very effective in detecting phishing attacks on mobile phones.",
"title": ""
},
{
"docid": "476bd671b982450d6d1f6c8d7936bcb5",
"text": "Walter Thiel developed the method that enables preservation of the body with natural colors in 1992. It consists in the application of an intravascular injection formula, and maintaining the corps submerged for a determinate period of time in the immersion solution in the pool. After immersion, it is possible to maintain the corps in a hermetically sealed container, thus avoiding dehydration outside the pool. The aim of this work was to review the Thiel method, searching all scientific articles describing this technique from its development point of view, and application in anatomy and morphology teaching, as well as in clinical and su rgic l practice. Most of these studies were carried out in Europe. We used PubMed, Ebsco and Embase databases with the terms “Thiel cadaver”, “Thiel embalming”, “Thiel embalming method” and we searched for papers that cited Thiel`s work. In comparison with methods commonly used with high concentrations of formaldehyde, this method lacks the emanation of noxious or irritating gases; gives the corps important passive joint mobility without stiffness; maintaining color, flexibility and tissue plasticity at a level e quivalent to that of a living body. Furthermore, it allows vascular repletion at the capillary level. All this makes for great advantage over the f rmalinfixed and fresh material. Its multiple uses are applicable in anatomy teaching and research; teaching for undergraduates (prose ction and dissection) and for training in surgical techniques for graduates and specialists (laparoscopies, arthroscopies, endoscopies).",
"title": ""
},
{
"docid": "2cb78c31d07fc14b6088515a1b3c2b45",
"text": "A dual-band circularly polarized antenna fed by four apertures that covers the bands of GPS (L1, L2, L5), Galileo (E5a, E5b, E1, E2, L1), and GLONASS (L1, L3) is introduced. A lotus-shaped aperture is added to optimize the coupling between the microstrip lines and the rings. Three wideband planar baluns are used to achieve good axial ratio (lower than 2.1 dB in both bands) and VSWR (41.2%). The measured results of the annular-ring microstrip antenna show good performance of a dual-band operation, and they confirm the validity of this design, which meets the requirement of Global Navigation Satellite System (GNSS) applications.",
"title": ""
},
{
"docid": "375ab5445e81c7982802bdb8b9cbd717",
"text": "Advances in healthcare have led to longer life expectancy and an aging population. The cost of caring for the elderly is rising progressively and threatens the economic well-being of many nations around the world. Instead of professional nursing facilities, many elderly people prefer living independently in their own homes. To enable the aging to remain active, this research explores the roles of technology in improving their quality of life while reducing the cost of healthcare to the elderly population. In particular, we propose a multi-agent service framework, called Context-Aware Service Integration System (CASIS), to integrate applications and services. This paper demonstrates several context-aware service scenarios these have been developed on the proposed framework to demonstrate how context technologies and mobile web services can help enhance the quality of care for an elder’s daily",
"title": ""
},
{
"docid": "4446ec55b23ae88192764cffd519afd3",
"text": "We present Inferential Power Analysis (IPA), a new class of attacks based on power analysis. An IPA attack has two stages: a profiling stage and a key extraction stage. In the profiling stage, intratrace differencing, averaging, and other statistical operations are performed on a large number of power traces to learn details of the implementation, leading to the location and identification of key bits. In the key extraction stage, the key is obtained from a very few power traces; we have successfully extracted keys from a single trace. Compared to differential power analysis, IPA has the advantages that the attacker does not need either plaintext or ciphertext, and that, in the key extraction stage, a key can be obtained from a small number of traces.",
"title": ""
}
] |
scidocsrr
|
0d9ef674922ab2d88f63b64737851325
|
Android-based home door locks application via Bluetooth for disabled people
|
[
{
"docid": "c2d7044828960976e148694ffcd8ed76",
"text": "This paper describes the Smart Home System for Disabled People via Bluetooth Wireless. Smart home system for disable people is the system called assistive domestics focuses on making it possible for the disabled to motivate them carry out the daily activity, safe and comfortable. However in our research work, we attempt to design the smart home system including the wireless controller via Bluetooth technology. This software application adapt in mobile phone, PDA, mobile computer (Samsung Galaxy Tab) using android's operating system (OS). This software application will control the electrical appliances switches wirelessly (Bluetooth). Results from this study found that the system was successfully produced where it is able to control any of the wireless switches at a distance of approximately 25 meter radius from the main controller. The system is seen potentially be used in hospitals, home care for the elderly and facilities for disabled users.",
"title": ""
},
{
"docid": "05e4cfafcef5ad060c1f10b9c6ad2bc0",
"text": "Mobile devices have been integrated into our everyday life. Consequently, home automation and security are becoming increasingly prominent features on mobile devices. In this paper, we have developed a security system that interfaces with an Android mobile device. The mobile device and security system communicate via Bluetooth because a short-range-only communications system was desired. The mobile application can be loaded onto any compatible device, and once loaded, interface with the security system. Commands to lock, unlock, or check the status of the door to which the security system is installed can be sent quickly from the mobile device via a simple, easy to use GUI. The security system then acts on these commands, taking the appropriate action and sending a confirmation back to the mobile device. The security system can also tell the user if the door is open. The door also incorporates a traditional lock and key interface in case the user loses the mobile device.",
"title": ""
}
] |
[
{
"docid": "bdffaedb490b6f3e0054b29159b1b3b5",
"text": "We explore our efforts to create a conceptual framework to describe and analyse the challenges around preparing teachers to create, sustain, and educate in a \"community of learners.\" In particular, we offer a new frame for conceptualizing teacher learning and development within communities and contexts. This conception allows us to understand the variety of ways in which teachers respond in the process of learning lo teach in the manner described by the \"Fostering a Community of Learners\" (FCL) programme. The model illustrates the ongoing interaction among individual student and teacher learning, institutional or programme learning, and the characteristics of the policy environment critical to the success of theory-intensive reform efforts such as FCL.",
"title": ""
},
{
"docid": "cf30e30d7683fd2b0dec2bd6cc354620",
"text": "As online courses such as MOOCs become increasingly popular, there has been a dramatic increase for the demand for methods to facilitate this type of organisation. While resources for new courses are often freely available, they are generally not suitably organised into easily manageable units. In this paper, we investigate how state-of-the-art topic segmentation models can be utilised to automatically transform unstructured text into coherent sections, which are suitable for MOOCs content browsing. The suitability of this method with regards to course organisation is confirmed through experiments with a lecture corpus, configured explicitly according to MOOCs settings. Experimental results demonstrate the reliability and scalability of this approach over various academic disciplines. The findings also show that the topic segmentation model which used discourse cues displayed the best results overall.",
"title": ""
},
{
"docid": "141b333f0c7b256be45c478a79e8f8eb",
"text": "Communications regulators over the next decade will spend increasing time on conflicts between the private interests of broadband providers and the public’s interest in a competitive innovation environment centered on the Internet. As the policy questions this conflict raises are basic to communications policy, they are likely to reappear in many different forms. So far, the first major appearance has come in the ‘‘open access’’ (or ‘‘multiple access’’) debate, over the desirability of allowing vertical integration between Internet Service Providers and cable operators. Proponents of open access see it as a structural remedy to guard against an erosion of the ‘‘neutrality’’ of the network as between competing content and applications. Critics, meanwhile, have taken open-access regulation as unnecessary and likely to slow the pace of broadband deployment.",
"title": ""
},
{
"docid": "30aad6adc2bb222f512db0a4e9eeecd3",
"text": "................................................................................................................... III",
"title": ""
},
{
"docid": "6d728174d576ac785ff093f4cdc16e1b",
"text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.",
"title": ""
},
{
"docid": "7786fac57e0c1392c6a5101681baecb0",
"text": "We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13000 and 14000 object and environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges and outline lessons learned and best practice for similar large scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object integrated wireless sensors; there is less than 2.5% packet loss after tuning. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.",
"title": ""
},
{
"docid": "2515a3ace56b101d03f8c9fed515b7d3",
"text": "Characteristics of knowledge, people engaged in knowledge transfer, and knowledge stickiness: evidence from Chinese R & D team Huang Huan, Ma Yongyuan, Zhang Sheng, Dou Qinchao, Article information: To cite this document: Huang Huan, Ma Yongyuan, Zhang Sheng, Dou Qinchao, \"Characteristics of knowledge, people engaged in knowledge transfer, and knowledge stickiness: evidence from Chinese R & D team\", Journal of Knowledge Management, https:// doi.org/10.1108/JKM-02-2017-0054 Permanent link to this document: https://doi.org/10.1108/JKM-02-2017-0054",
"title": ""
},
{
"docid": "c2869d1324181e08cc80a9ba069dead8",
"text": "Human identifi cation leads to mutual trust that is essential for the proper functioning of society. We have been identifying fellow humans based on their voice, appearance, or gait for thousands of years. However, a systematic and scientifi c basis for human identifi cation started in the nineteenth century when Alphonse Bertillon (Rhodes and Henry 1956 ) introduced the use of a number of anthropomorphic measurements to identify habitual criminals. The Bertillon system was short-lived: soon after its introduction, the distinctiveness of human fi ngerprints was established. Since the early 1900s, fi ngerprints have been an accepted method in forensic investigations to identify suspects and repeat criminals. Now, virtually all law enforcement agencies worldwide use Automatic Fingerprint Identifi cation Systems (AFIS). With growing concerns about terrorist activities, security breaches, and fi nancial fraud, other physiological and behavioral human characteristics have been used for person identifi cation. These distinctive characteristics, or biometric traits, include features such as face, iris, palmprint, and voice. Biometrics (Jain et al. 2006, 2007 ) is now a mature technology that is widely used in a variety of applications ranging from border crossings (e.g., the US-VISIT program) to visiting Walt Disney Parks.",
"title": ""
},
{
"docid": "da17a995148ffcb4e219bb3f56f5ce4a",
"text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.",
"title": ""
},
{
"docid": "81929e053df9e5c8068286020c7f2c96",
"text": "Distance metric learning (DML) is critical for a wide variety of machine learning algorithms and pattern recognition applications. Transfer metric learning (TML) leverages the side information (e.g., similar/dissimilar constraints over pairs of samples) from related domains to help the target metric learning (with limited information). Current TML tools usually assume that different domains exploit the same feature representation, and thus are not applicable to tasks where data are drawn from heterogeneous domains. Heterogeneous transfer learning approaches handle heterogeneous domains by usually learning feature transformations across different domains. The learned transformation can be used to derive a metric, but these approaches are mostly limited by their capability of only handling two domains. This motivates the proposed heterogeneous multi-task metric learning (HMTML) framework for handling multiple domains by combining side information and unlabeled data. Specifically, HMTML learns the metrics for all different domains simultaneously by maximizing their high-order correlation (parameterized by feature covariance of unlabeled data) in a common subspace, which is induced by the transformations derived from the metrics. Extensive experiments on both multi-language text categorization and multi-view social image annotation demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "ac7156831175817cc9c0e81d2f0bb980",
"text": "Social networking sites (SNS) have become a significant component of people’s daily lives and have revolutionized the ways that business is conducted, from product development and marketing to operation and human resource management. However, there have been few systematic studies that ask why people use such systems. To try to determine why, we proposed a model based on uses and gratifications theory. Hypotheses were tested using PLS on data collected from 148 SNS users. We found that user utilitarian (rational and goal-oriented) gratifications of immediate access and coordination, hedonic (pleasure-oriented) gratifications of affection and leisure, and website social presence were positive predictors of SNS usage. While prior research focused on the hedonic use of SNS, we explored the predictive value of utilitarian factors in SNS. Based on these findings, we suggest a need to focus on the SNS functionalities to provide users with both utilitarian and hedonic gratifications, and suggest incorporating appropriate website features to help users evoke a sense of human contact in the SNS context.",
"title": ""
},
{
"docid": "0417b64d8de01f21a795126868b13a6b",
"text": "The recent sediments of Frains Lake, Michigan contain a rich and well preserved association of chrysophycean cysts. Forty one forms are revealed by scanning electron microscopy (SEM) and light microscopy (LM). Taxonomic descriptions. and SEM micrographs are provided for the dominant forms. The three dominant taxa throughout the sediments, Cysta minima, C. modica and C. subbavaricum, do not show significant shifts in proportional abundance associated with European settlement and the onset of cultural eutrophication. However, certain subdominant taxa do show clear trends. Density counts indicate a dramatic decline in cyst concentration (by volume and by dry mass) and a sharp increase in absolute accumulation (net annual influx) following settlement. The Frains Lake profile of chrysophycean cysts is compared to sequences of other North American and European temperate lakes. The utility of chrysophycean cysts as paleoenvironmental indicators is considered on the basis of these results.",
"title": ""
},
{
"docid": "15212b3465c35b12ab25507411853048",
"text": "This paper developed a phosphor layer applied for thin-film flip-chip light-emitting diodes (TFFC-LEDs) to produce uniform phosphor-converted TFFC white LEDs (TFFC-WLEDs) by combination of laser liftoff, secondary transferring, and surface roughening process. The spin-coating method was used for phosphor layer fabrication onto a substrate to form the phosphor permanent substrate. The TFFC-LEDs were then bonded onto the permanent substrate. From the results, the blue TFFC GaN-based LED with roughened u-GaN surface on a glass substrate (TFRG-LED) demonstrated a 54.2% (at 350 mA) enhancement in light output power, compared with a blue flip-chip GaN-on-sapphire based LED. As the TFFC GaN/phosphor-/glass-based white LED with roughened u-GaN surface (TFRG-WLED) was operated at a forward-bias current of 350 mA, the enhancement of luminous flux was increased by 75.5%, compared with a TFFC GaN/phosphor template-/glass-based white LED. The angular correlated color temperature (CCT) deviation of a TFRG-WLED can be reduced to 1279 K in the range from -75° to +75° at 5000-6000 K application. The TFRG-WLED was fabricated on the glass substrate with the roughened u-GaN surface and the structure of phosphor layer closed to the u-GaN. These kinds of FRGB-WLED structure contribute to a better light extraction characteristic and a higher CCT stability.",
"title": ""
},
{
"docid": "e9083ef6da596c8570d2e0373bfd8b31",
"text": "A method to improve blurring in an aerial image formed by a novel optical imaging element consisting of micro mirror array is proposed. This method is based on prior inverse filtering with a point-spread function.",
"title": ""
},
{
"docid": "8f957dab2aa6b186b61bc309f3f2b5c3",
"text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.",
"title": ""
},
{
"docid": "d504185046e8f51c65a25e448598a2b9",
"text": "The improved version of a broadband planar magic-T using microstrip-slotline transitions is presented. The design implements a small microstrip-slotline tee junction with minimum size slotline terminations to reduce radiation loss. A multisection impedance transformation network is used to increase the operating bandwidth and minimize the parasitic coupling around the microstrip-slotline tee junction. As a result, the improved magic-T has greater bandwidth and lower phase imbalance at the sum and difference ports than the earlier magic-T design. The experimental results show that the 10-GHz magic-T provides more than 70% of 1-dB operating bandwidth with the average in-band insertion loss of less than 0.6 dB. It also has phase and amplitude imbalance of less than plusmn1deg and plusmn0.25 dB, respectively.",
"title": ""
},
{
"docid": "b1422b2646f02a5a84a6a4b13f5ae7d8",
"text": "Two experiments examined the influence of timbre on auditory stream segregation. In experiment 1, listeners heard sequences of orchestral tones equated for pitch and loudness, and they rated how strongly the instruments segregated. Multidimensional scaling analyses of these ratings revealed that segregation was based on the static and dynamic acoustic attributes that influenced similarity judgements in a previous experiment (P Iverson & CL Krumhansl, 1993). In Experiment 2, listeners heard interleaved melodies and tried to recognize the melodies played by a target timbre. The results extended the findings of Experiment 1 to tones varying pitch. Auditory stream segregation appears to be influenced by gross differences in static spectra and by dynamic attributes, including attack duration and spectral flux. These findings support a gestalt explanation of stream segregation and provide evidence against peripheral channel model.",
"title": ""
},
{
"docid": "1cc7f97c7195f7f2dc45e07e3a4a8f78",
"text": "Translucent materials are ubiquitous, and simulating their appearance requires accurate physical parameters. However, physically-accurate parameters for scattering materials are difficult to acquire. We introduce an optimization framework for measuring bulk scattering properties of homogeneous materials (phase function, scattering coefficient, and absorption coefficient) that is more accurate, and more applicable to a broad range of materials. The optimization combines stochastic gradient descent with Monte Carlo rendering and a material dictionary to invert the radiative transfer equation. It offers several advantages: (1) it does not require isolating single-scattering events; (2) it allows measuring solids and liquids that are hard to dilute; (3) it returns parameters in physically-meaningful units; and (4) it does not restrict the shape of the phase function using Henyey-Greenstein or any other low-parameter model. We evaluate our approach by creating an acquisition setup that collects images of a material slab under narrow-beam RGB illumination. We validate results by measuring prescribed nano-dispersions and showing that recovered parameters match those predicted by Lorenz-Mie theory. We also provide a table of RGB scattering parameters for some common liquids and solids, which are validated by simulating color images in novel geometric configurations that match the corresponding photographs with less than 5% error.",
"title": ""
},
{
"docid": "445685897a2e7c9c5b44a713690bd0a8",
"text": "Maximum power point tracking (MPPT) is an integral part of a system of energy conversion using photovoltaic (PV) arrays. The power-voltage characteristic of PV arrays operating under partial shading conditions exhibits multiple local maximum power points (LMPPs). In this paper, a new method has been presented to track the global maximum power point (GMPP) of PV. Compared with the past proposed global MPPT techniques, the method proposed in this paper has the advantages of determining whether partial shading is present, calculating the number of peaks on P-V curves, and predicting the locations of GMPP and LMPP. The new method can quickly find GMPP, and avoid much energy loss due to blind scan. The experimental results verify that the proposed method guarantees convergence to the global MPP under partial shading conditions.",
"title": ""
}
] |
scidocsrr
|
ae1922e33729b4bc058b73469ee996ea
|
Systematic design of unitary space-time constellations
|
[
{
"docid": "a2faba3e69563acf9e874bf4c4327b5d",
"text": "We analyze a mobile wireless link comprising M transmitter andN receiver antennas operating in a Rayleigh flat-fading environment. The propagation coef fici nts between every pair of transmitter and receiver antennas are statistically independent and un known; they remain constant for a coherence interval ofT symbol periods, after which they change to new independent v alues which they maintain for anotherT symbol periods, and so on. Computing the link capacity, associated with channel codin g over multiple fading intervals, requires an optimization over the joint density of T M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater t han the length of the coherence interval: the capacity forM > T is equal to the capacity for M = T . Capacity is achieved when the T M transmitted signal matrix is equal to the product of two stat i ically independent matrices: a T T isotropically distributed unitary matrix times a certain T M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity f or many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence i nterval increases, the capacity approaches the capacity obtained as if the receiver knew the propagatio n coefficients. Index Terms —Multi-element antenna arrays, wireless communications, space-time modulation",
"title": ""
},
{
"docid": "2b540b2e48d5c381e233cb71c0cf36fe",
"text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.",
"title": ""
},
{
"docid": "c74b93fff768f024b921fac7f192102d",
"text": "Motivated by information-theoretic considerations, we pr opose a signalling scheme, unitary spacetime modulation, for multiple-antenna communication links. This modulati on s ideally suited for Rayleigh fast-fading environments, since it does not require the rec iv r to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T M space-time signals f `; ` = 1; : : : ; Lg, whereT represents the coherence interval during which the fading i s approximately constant, and M < T is the number of transmitter antennas. The columns of each ` are orthonormal. When the receiver does not know the propagation coefficients, which between pa irs of transmitter and receiver antennas are modeled as statistically independent, this modulation per forms very well either when the SNR is high or whenT M . We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit error probability with maximum likelihood decoding. We demonstrate that two antennas have a 6 dB diversity gain over one antenna at 15 dB SNR. Index Terms —Multi-element antenna arrays, wireless communications, channel coding, fading channels, transmitter and receiver diversity, space-time modu lation",
"title": ""
}
] |
[
{
"docid": "1368ea6ddef1ac1c37261a532d630b7a",
"text": "Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most of the established recognition methods are supervised, which have strong dependence on image labels. However, obtaining the labels of radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method that is based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator that is used in DCGANs and utilize the two discriminators for joint training. In this process, we introduce a noisy data learning theory to reduce the negative impact of the incorrectly labeled samples on the performance of the networks. We replace the last layer of the classic discriminators with the standard softmax function to output a vector of class probabilities so that we can recognize multiple objects. We subsequently modify the loss function in order to adapt to the revised network structure. In our model, the two discriminators share the same generator, and we take the average value of them when computing the loss function of the generator, which can improve the training stability of DCGANs to some extent. We also utilize images of higher quality from the generated images for training in order to improve the performance of the networks. Our method has achieved state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we have proved that using the generated images to train the networks can improve the recognition accuracy with a small number of labeled samples.",
"title": ""
},
{
"docid": "37efaf5cbd7fb400b713db6c7c980d76",
"text": "Social media users who post bullying related tweets may later experience regret, potentially causing them to delete their posts. In this paper, we construct a corpus of bullying tweets and periodically check the existence of each tweet in order to infer if and when it becomes deleted. We then conduct exploratory analysis in order to isolate factors associated with deleted posts. Finally, we propose the construction of a regrettable posts predictor to warn users if a tweet might cause regret.",
"title": ""
},
{
"docid": "889cd95ceaeb405dd626a910be100130",
"text": "In this paper, we propose an anomaly-detection approach applied for video surveillance in crowded scenes. This approach is an unsupervised statistical learning framework based on analysis of spatiotemporal video-volume configuration within video cubes. It learns global activity patterns and local salient behavior patterns via clustering and sparse coding, respectively. Upon the composition-pattern dictionary learned from normal behavior, a sparse reconstruction cost criterion is designed to detect anomalies that occur in video both globally and locally. In addition, a multiple scale analysis is employed for obtaining accurate anomaly localization, considering scale variations of abnormal events. This approach is verified on publically available anomaly-detection datasets and compared with other existing work. The experiment results demonstrate that it not only detects various anomalies more efficiently, but also locates anomalous regions more accurately. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "83af9371062e093db6ca7dbfa49a1638",
"text": "Scan-matching is a technique that can be used for building accurate maps and estimating vehicle motion by comparing a sequence of point cloud measurements of the environment taken from a moving sensor. One challenge that arises in mapping applications where the sensor motion is fast relative to the measurement time is that scans become locally distorted and difficult to align. This problem is common when using 3D laser range sensors, which typically require more scanning time than their 2D counterparts. Existing 3D mapping solutions either eliminate sensor motion by taking a “stop-and-scan” approach, or attempt to correct the motion in an open-loop fashion using odometric or inertial sensors. We propose a solution to 3D scan-matching in which a continuous 6DOF sensor trajectory is recovered to correct the point cloud alignments, producing locally accurate maps and allowing for a reliable estimate of the vehicle motion. Our method is applied to data collected from a 3D spinning lidar sensor mounted on a skid-steer loader vehicle to produce quality maps of outdoor scenes and estimates of the vehicle trajectory during the mapping sequences.",
"title": ""
},
{
"docid": "897a6d208785b144b5d59e4f346134cd",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "28115d61e528af469220651bcd7d592a",
"text": "There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships, i.e. they are not able to process temporal input sequences of arbitrary length. Fuzzy nite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike in the case of deterministic nite-state automata (DFAs), FFAs are not in one particular state, rather each state is occupied to some degree deened by a membership function. Based on previous work on encoding DFAs in discrete-time, second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes a FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by measuring string recognition performance of recurrent neural networks which encode large randomly generated FFAs. In particular, we examine how the networks' performance varies as a function of synaptic weight strength.",
"title": ""
},
{
"docid": "743013a67dc32a53dfa5019b0e60f151",
"text": "With the rapid development of airlines, airports today become much busier and more complicated than previous days. During airlines daily operations, assigning the available gates to the arriving aircrafts based on the fixed schedule is a very important issue, which motivates researchers to study and solve Airport Gate Assignment Problems (AGAP) with all kinds of state-of-the-art combinatorial optimization techniques. In this paper, we study the AGAP and propose a novel hybrid mathematical model based on the method of constraint programming and 0 1 mixed-integer programming. With the objective to minimize the number of gate conflicts of any two adjacent aircrafts assigned to the same gate, we build a mathematical model with logical constraints and the binary constraints. For practical considerations, the potential objective of the model is also to minimize the number of gates that airlines must lease or purchase in order to run their business smoothly. We implement the model in the Optimization Programming Language (OPL) and carry out empirical studies with the data obtained from online timetable of Continental Airlines, Houston Gorge Bush Intercontinental Airport IAH, which demonstrate that our model can provide an efficient evaluation criteria for the airline companies to estimate the efficiency of their current gate assignments.",
"title": ""
},
{
"docid": "0abde18bd6199064e16c36c75165c0b6",
"text": "Renewable energy-based off-grid or decentralised electricity supply has traditionally considered a single technology-based limited level of supply to meet the basic needs, without considering reliable energy provision to rural consumers. The purpose of this paper is to propose the best hybrid technology combination for electricity generation from a mix of renewable energy resources to satisfy the electrical needs in a reliable manner of an off-grid remote village, Palari in the state of Chhattisgarh, India. Four renewable resources, namely, small-scale hydropower, solar photovoltaic systems, wind turbines and bio-diesel generators are considered. The paper estimates the residential, institutional, commercial, agricultural and small-scale industrial demand in the pre-HOMER analysis. Using HOMER, the paper identifies the optimal off-grid option and compares this with conventional grid extension. The solution obtained shows that a hybrid combination of renewable energy generators at an off-grid location can be a cost-effective alternative to grid extension and it is sustainable, techno-economically viable and environmentally sound. The paper also presents a post-HOMER analysis and discusses issues that are likely to affect/influence the realisation of the optimal solution. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "56acc9fd9d211a4c644398f40492392d",
"text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and include variety of objects like smart phones, tablets, digital cameras, sensors, etc. Once all these devices are connected with each other, they enable more and more smart processes and services that support our basic needs, economies, environment and health. Such enormous number of devices connected to internet provides many kinds of services and produce huge amount of data and information. Cloud computing is a model for on-demand access to a shared pool of configurable resources (e.g. compute, networks, servers, storage, applications, services, and software) that can be easily provisioned as Infrastructure (IaaS), software and applications (SaaS). Cloud based platforms help to connect to the things (IaaS) around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications (SaaS). Hence, cloud acts as a front end to access Internet of Things. Applications that interact with devices like sensors have special requirements of massive storage to storage big data, huge computation power to enable the real time processing of the data, and high speed network to stream audio or video. In this paper, we describe how Internet of Things and Cloud computing can work together can address the Big Data issues. We also illustrate about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture and Environment monitoring. Finally, we also propose a prototype model for providing sensing as a service on cloud.",
"title": ""
},
{
"docid": "ff93c200156cfe82fbbeccf66055fc54",
"text": "According to the property of wavelet transform and fabric texture's Fourier spectrum, a new method for defect detection was presented. The proposed method is based on wavelet lifting transform with one resolution level. By using restoration scheme of the Fourier transform, the normal fabric textures of smooth sub-image in the spatial domain are removed by detecting the high-energy frequency components of sub-image in the Fourier domain, setting them to zero using frequency-domain filter, and back-transforming to a spatial domain sub-image. Then, the smooth and detail sub-images are segmented into many sub-windows, in which standard deviation are calculated as extracted features. The extracted features are compared with normal sub-window's features to determine whether there exists defect. Experimental results show that this method is validity and feasibility.",
"title": ""
},
{
"docid": "78a29e0e00aa65517a70fc17293e84c4",
"text": "The model parameters of convolutional neural networks (CNNs) are determined by backpropagation (BP). In this work, we propose an interpretable feedforward (FF) design without any BP as a reference. The FF design adopts a data-centric approach. It derives network parameters of the current layer based on data statistics from the output of the previous layer in a one-pass manner. To construct convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform. It is a variant of the principal component analysis (PCA) with an added bias vector to annihilate activation’s nonlinearity. Multiple Saab transforms in cascade yield multiple convolutional layers. As to fully-connected (FC) layers, we construct them using a cascade of multi-stage linear least squared regressors (LSRs). The classification and robustness (against adversarial attacks) performances of BPand FF-designed CNNs applied to the MNIST and the CIFAR-10 datasets are compared. Finally, we comment on the relationship between BP and FF designs.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "43685bd1927f309c8b9a5edf980ab53f",
"text": "In this paper we propose a pipeline for accurate 3D reconstruction from multiple images that deals with some of the possible sources of inaccuracy present in the input data. Namely, we address the problem of inaccurate camera calibration by including a method [1] adjusting the camera parameters in a global structure-and-motion problem which is solved with a depth map representation that is suitable to large scenes. Secondly, we take the triangular mesh and calibration improved by the global method in the first phase to refine the surface both geometrically and radiometrically. Here we propose surface energy which combines photo consistency with contour matching and minimize it with a gradient method. Our main contribution lies in effective computation of the gradient that naturally balances weight between regularizing and data terms by employing scale space approach to find the correct local minimum. The results are demonstrated on standard high-resolution datasets and a complex outdoor scene.",
"title": ""
},
{
"docid": "ac7b607cc261654939868a62822a58eb",
"text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.",
"title": ""
},
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "cffbb69ca7df3b0762d246ac358d5e5b",
"text": "This paper presents a 22 nm CMOS technology analog front-end (AFE) for biomedical applications. The circuit is designed for low power and small size implementations, especially for battery-powered implantable devices, and is capable of reading out biomedical signals in the range of 0.01 Hz to 300 Hz in frequency, while rejecting power-line frequency of 50/60Hz. It employs Operational Transconductance Amplifiers (OTAs) in an OTA-C structure to realize a notch filter. The OTA designed has a very low transconductance, which is programmable from 1.069 nA/V to 2.114 nA/V. The notch at power-line frequency (50/60 Hz) achieves an attenuation of 20 dB. The power consumption of the entire AFE was found to be 11.34 nW at ±0.95V supply.",
"title": ""
},
{
"docid": "b14b36728c1775a8469bce1c42ce8783",
"text": "Inorganic scintillators are commonly used as sensors for ionizing radiation detectors in a variety of applications, ranging from particle and nuclear physics detectors, medical imaging, nuclear installations radiation control, homeland security, well oil logging and a number of industrial non-destructive investigations. For all these applications, the scintillation light produced by the energy deposited in the scintillator allows the determination of the position, the energy and the time of the event. However, the performance of these detectors is often limited by the amount of light collected on the photodetector. A major limitation comes from the fact that inorganic scintillators are generally characterized by a high refractive index, as a consequence of the required high density to provide the necessary stopping power for ionizing radiation. The index mismatch between the crystal and the surrounding medium (air or optical grease) strongly limits the light extraction efficiency because of total internal reflection (TIR), increasing the travel path and the absorption probability through multiple bouncings of the photons in the crystal. Photonic crystals can overcome this problem and produce a controllable index matching between the crystal and the output medium through an interface made of a thin nano-structured layer of optically-transparent high index material. This review presents a summary of the works aiming at improving the light collection efficiency of scintillators using photonic crystals since this idea was introduced 10 years ago.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "e13d6cd043ea958e9731c99a83b6de18",
"text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.",
"title": ""
},
{
"docid": "58f2511c7a4d212c2f8dbece784b9dee",
"text": "Blockchain has received great attention in recent years and motivated innovations in different scenarios. However, many vital issues which affect its performance are still open. For example, it is widely convinced that high level of security and scalability and full decentralization are still impossible to achieve simultaneously. In this paper, we propose Bicomp, a bilayer scalable Nakamoto consensus protocol, which is an approach based on high security and pure decentralized Nakamoto consensus, and with a significant improvement on scalability. In Bicomp, two kinds of blocks are generated, i.e., microblocks for concurrent transaction packaging in network, and macroblocks for leadership competition and chain formation. A leader is elected at beginning of each round by using a macroblock header from proof-of-work. An elected leader then receives and packages multiple microblocks mined by different nodes into one macroblock during its tenure, which results in a bilayer block structure. Such design limits a leader’s power and encourages as many nodes as possible to participate in the process of packaging transactions, which promotes the sharding nature of the system. Furthermore, several mechanisms are carefully designed to reduce transaction overlapping and further limit a leader’s power, among which a novel transaction diversity based metric is proposed as the second level criteria besides the longest-chain-first principle on selecting a legitimate chain when fork happens. Security issues and potential attacks to Bicomp are extensively discussed and experiments for evaluation are performed. From the experimental results based on 50 nodes all over the world, Bicomp achieves significant improvement on scalability than that of Bitcoin and Ethereum, while the security and decentralization merits are still preserved.",
"title": ""
}
] |
scidocsrr
|
22e06f0337b65f50ff312f0d8aebcb4c
|
A Review on Internet of Things for Defense and Public Safety
|
[
{
"docid": "ca19a74fde1b9e3a0ab76995de8b0f36",
"text": "Sensors on (or attached to) mobile phones can enable attractive sensing applications in different domains, such as environmental monitoring, social networking, healthcare, transportation, etc. We introduce a new concept, sensing as a service (S2aaS), i.e., providing sensing services using mobile phones via a cloud computing system. An S2aaS cloud needs to meet the following requirements: 1) it must be able to support various mobile phone sensing applications on different smartphone platforms; 2) it must be energy-efficient; and 3) it must have effective incentive mechanisms that can be used to attract mobile users to participate in sensing activities. In this vision paper, we identify unique challenges of designing and implementing an S2aaS cloud, review existing systems and methods, present viable solutions, and point out future research directions.",
"title": ""
}
] |
[
{
"docid": "761ff3bbbb50ae44243f6f6ff60349a0",
"text": "Memristor technology is regarded as a potential solution to the memory bottleneck in Von Neumann Architecture by putting storage and computation integrated in the same physical location. In this paper, we proposed a nonvolatile exclusive-OR (XOR) logic gate with 5 memristors, which can execute operation in a single step. Moreover, based on the XOR logic gate, a full adder was presented and simulated by SPICE. Compared to other logic gate and full adder, the proposed circuits have benefits of simpler architecture, higher speed and lower power consumption. This paper provides a memristor-based element as a solution to the future alternative Computation-In-Memory architecture.",
"title": ""
},
{
"docid": "462e1d091882171cf57ccb247e6fd438",
"text": "Temperature changes have a strong effect on Hemispherical Resonator Gyro (HRG) output; therefore, it is of vital importance to observe their influence and then make necessary compensations. In this paper, a temperature compensation model for HRG based on the natural frequency of the resonator is established and then temperature drift compensations are accomplished. To begin with, a math model of the relationship between the temperature and the natural frequency of HRG is set up. Then, the math model is written into a Taylor expansion expression and the expansion coefficients are calibrated through temperature experiments. The experimental results show that the frequency changes correspond to temperature changes and each temperature only corresponds to one natural frequency, so the output of HRG can be compensated through the natural frequency of the resonator instead of the temperature itself. As a result, compensations are made for the output drift of HRG based on natural frequency through a stepwise linear regression method. The compensation results show that temperature-frequency method is valid and suitable for the gyroscope drift compensation, which would ensure HRG's application in a larger temperature range in the future.",
"title": ""
},
{
"docid": "e2b74db574db8001dace37cbecb8c4eb",
"text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.",
"title": ""
},
{
"docid": "42a0a7a43bce26f2a7c6c1320b25a9f2",
"text": "A HPLC method with UV detection at 262nm was developed to analyze inositol hexanicotinate in rat plasma. Plasma samples were extracted with an equal volume of acetonitrile, followed by dilution with mobile phase buffer (5mM phosphate buffer, pH 6.0) to eliminate any solvent effects. Inositol hexanicotinate and the internal standard (mebendazole) were separated isocratically using a mobile phase of acetonitrile/phosphate buffer (35:65, v/v, pH 6.0) at a flow rate of 1.0mL/min and a reverse-phase XTerra MS C(18) column (4.6mmx150mm, 3.5microm). The standard curve was linear over a concentration range of 1.5-100.0microg/mL of inositol hexanicotinate in rat plasma. The HPLC method was validated with intra- and inter-day precisions of 1.55-4.30% and 2.69-21.5%, respectively. The intra- and inter-day biases were -0.75 to 19.8% and 2.58-22.0%, respectively. At plasma concentrations of 1.5-100microg/mL, the mean recovery of inositol hexanicotinate was 99.6%. The results of a stability study indicated that inositol hexanicotinate was unstable in rat plasma samples, but was stable in acetonitrile extracts of rat plasma for up to 24h at 4 degrees C. The assay is simple, rapid, specific, sensitive, and reproducible and has been used successfully to analyze inositol hexanicotinate plasma concentrations in a pharmacokinetic study using the rat as an animal model.",
"title": ""
},
{
"docid": "ef0d1c2904ee9b5ad7310e11831d175f",
"text": "Topic modeling has become a widely used tool for document management due to its superior performance. However, there are few topic models distinguishing the importance of documents on different topics. In this paper, we investigate how to utilize the importance of documents to improve topic modeling and propose to incorporate link based ranking into topic modeling. Specifically, topical pagerank is used to compute the topic level ranking of documents, which indicates the importance of documents on different topics. By retreating the topical ranking of a document as the probability of the document involved in corresponding topic, a generalized relation is built between ranking and topic modeling. Based on the relation, a ranking based topic model Rank Topic is proposed. With Rank Topic, a mutual enhancement framework is established between ranking and topic modeling. Extensive experiments on paper citation data and Twitter data are conducted to compare the performance of Rank Topic with that of some state-of-the-art topic models. Experimental results show that Rank Topic performs much better than some baseline models and is comparable with the state-of-the-art link combined relational topic model (RTM) in generalization performance, document clustering and classification by setting a proper balancing parameter. It is also demonstrated in both quantitative and qualitative ways that topics detected by Rank Topic are more interpretable than those detected by some baseline models and still competitive with RTM.",
"title": ""
},
{
"docid": "06126613f168e84aa64f98030fa2d99a",
"text": "Errors in sample handling or test interpretation may cause false positives in forensic DNA testing. This article uses a Bayesian model to show how the potential for a false positive affects the evidentiary value of DNA evidence and the sufficiency of DNA evidence to meet traditional legal standards for conviction. The Bayesian analysis is contrasted with the \"false positive fallacy,\" an intuitively appealing but erroneous alternative interpretation. The findings show the importance of having accurate information about both the random match probability and the false positive probability when evaluating DNA evidence. It is argued that ignoring or underestimating the potential for a false positive can lead to serious errors of interpretation, particularly when the suspect is identified through a \"DNA dragnet\" or database search, and that ignorance of the true rate of error creates an important element of uncertainty about the value of DNA evidence.",
"title": ""
},
{
"docid": "9c7de005e64ba67981dd7d603b80ee35",
"text": "Streptococcus mitis (S. mitis) and Pseudomonas aeruginosa (P. aeruginosa) are typically found in the upper respiratory tract of infants. We previously found that P. aeruginosa and S. mitis were two of the most common bacteria in biofilms on newborns' endotracheal tubes (ETTs) and in their sputa and that S. mitis was able to produce autoinducer-2 (AI-2), whereas P. aeruginosa was not. Recently, we also found that exogenous AI-2 and S. mitis could influence the behaviors of P. aeruginosa. We hypothesized that S. mitis contributes to this interspecies interaction and that inhibition of AI-2 could result in inhibition of these effects. To test this hypothesis, we selected PAO1 as a representative model strain of P. aeruginosa and evaluated the effect of S. mitis as well as an AI-2 analog (D-ribose) on mono- and co-culture biofilms in both in vitro and in vivo models. In this context, S. mitis promoted PAO1 biofilm formation and pathogenicity. Dual-species (PAO1 and S. mitis) biofilms exhibited higher expression of quorum sensing genes than single-species (PAO1) biofilms did. Additionally, ETTs covered in dual-species biofilms increased the mortality rate and aggravated lung infection compared with ETTs covered in mono-species biofilms in an endotracheal intubation rat model, all of which was inhibited by D-ribose. Our results demonstrated that S. mitis AI-2 plays an important role in interspecies interactions with PAO1 and may be a target for inhibition of biofilm formation and infection in ventilator-associated pneumonia.",
"title": ""
},
{
"docid": "e723f76f4c9b264cbf4361b72c7cbf10",
"text": "With the constant growth in Information and Communication Technology (ICT) in the last 50 years or so, electronic communication has become part of the present day system of living. Equally, smileys or emoticons were innovated in 1982, and today the genre has attained a substantial patronage in various aspects of computer-mediated communication (CMC). Ever since written forms of electronic communication lack the face-to-face (F2F) situation attributes, emoticons are seen as socio-emotional suppliers to the CMC. This article reviews scholarly research in that field in order to compile variety of investigations on the application of emoticons in some facets of CMC, i.e. Facebook, Instant Messaging (IM), and Short Messaging Service (SMS). Key findings of the review show that emoticons do not just serve as paralanguage elements rather they are compared to word morphemes with distinctive significative functions. In other words, they are morpheme-like units and could be derivational, inflectional, or abbreviations but not unbound. The findings also indicate that emoticons could be conventionalized as well as being paralinguistic elements, therefore, they should be approached as contributory to conversation itself not mere compensatory to language.",
"title": ""
},
{
"docid": "694db7f9aa3ba4769a7dfa49b62bdcd8",
"text": "While still relatively “new”, the quantum-dot cellular automata (QCA) appears to be able to provide many of the properties and functionalities that have made CMOS successful over the past several decades. Early experiments have demonstrated and realized most, if not all, of the “fundamentals” needed for a computational circuit – devices, logic gates, wires, etc. This study introduces the beginning of a next step in experimental work: designing a computationally useful – yet simple and fabricatable circuit for QCA. The design target is a QCA Field Programmable Gate",
"title": ""
},
{
"docid": "e75b7c2fcdfc19a650d7da4e6ae643a2",
"text": "With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services.",
"title": ""
},
{
"docid": "60b902dbe2bfe5e2e998da0071f38004",
"text": "Opcode sequences from decompiled executables have been employed to detect malware. Currently, opcode sequences are extracted using text-based methods, and the limitation of this method is that the extracted opcode sequences cannot represent the true behaviors of an executable. To solve this issue, we present a control flow-based method to extract executable opcode behaviors. The behaviors extracted by this method can fully represent the behavior characteristics of an executable. To verify the efficiency of control flow-based behaviors, we perform a comparative study of the two types of opcode behavior analysis methods. The experimental results indicate that the proposed control flow-based method has a higher overall accuracy and a lower false positive rate. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd68f1e50052709d85cabf55bb1764df",
"text": "Multi-label classification is one of the most challenging tasks in the computer vision community, owing to different composition and interaction (e.g. partial visibility or occlusion) between objects in multi-label images. Intuitively, some objects usually co-occur with some specific scenes, e.g. the sofa often appears in a living room. Therefore, the scene of a given image may provides informative cues for identifying those embedded objects. In this paper, we propose a novel scene-aware deep framework for addressing the challenging multi-label classification task. In particular, we incorporate two sub-networks that are pre-trained for different tasks (i.e. object classification and scene classification) into a unified framework, so that informative scene-aware cues can be leveraged for benefiting multi-label object classification. In addition, we also present a novel one vs. all multiple-cross-entropy (MCE) loss for optimizing the proposed scene-aware deep framework by independently penalizing the classification error for each label. The proposed method can be learned in an end-to-end manner and extensive experimental results on Pascal VOC 2007 and MS COCO demonstrate that our approach is able to make a noticeable improvement for the multi-label classification task.",
"title": ""
},
{
"docid": "de8045598fe808788aca455eee4a1126",
"text": "This paper presents an efficient and practical approach for automatic, unsupervised object detection and segmentation in two-texture images based on the concept of Gabor filter optimization. The entire process occurs within a hierarchical framework and consists of the steps of detection, coarse segmentation, and fine segmentation. In the object detection step, the image is first processed using a Gabor filter bank. Then, the histograms of the filtered responses are analyzed using the scale-space approach to predict the presence/absence of an object in the target image. If the presence of an object is reported, the proposed approach proceeds to the coarse segmentation stage, wherein the best Gabor filter (among the bank of filters) is automatically chosen, and used to segment the image into two distinct regions. Finally, in the fine segmentation step, the coefficients of the best Gabor filter (output from the previous stage) are iteratively refined in order to further fine-tune and improve the segmentation map produced by the coarse segmentation step. In the validation study, the proposed approach is applied as part of a machine vision scheme with the goal of quantifying the stain-release property of fabrics. To that end, the presented hierarchical scheme is used to detect and segment stains on a sizeable set of digitized fabric images, and the performance evaluation of the detection, coarse segmentation, and fine segmentation steps is conducted using appropriate metrics. The promising nature of these results bears testimony to the efficacy of the proposed approach.",
"title": ""
},
{
"docid": "6df61e330f6b71c4ef136e3a2220a5e2",
"text": "In recent years, we have seen significant advancement in technologies to bring about smarter cities worldwide. The interconnectivity of things is the key enabler in these initiatives. An important building block is smart mobility, and it revolves around resolving land transport challenges in cities with dense populations. A transformative direction that global stakeholders are looking into is autonomous vehicles and the transport infrastructure to interconnect them to the traffic management system (that is, vehicle to infrastructure connectivity), as well as to communicate with one another (that is, vehicle to vehicle connectivity) to facilitate better awareness of road conditions. A number of countries had also started to take autonomous vehicles to the roads to conduct trials and are moving towards the plan for larger scale deployment. However, an important consideration in this space is the security of the autonomous vehicles. There has been an increasing interest in the attacks and defences of autonomous vehicles as these vehicles are getting ready to go onto the roads. In this paper, we aim to organize and discuss the various methods of attacking and defending autonomous vehicles, and propose a comprehensive attack and defence taxonomy to better categorize each of them. Through this work, we hope that it provides a better understanding of how targeted defences should be put in place for targeted attacks, and for technologists to be more mindful of the pitfalls when developing architectures, algorithms and protocols, so as to realise a more secure infrastructure composed of dependable autonomous vehicles.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "7eca894697ee372abe6f67a069dcd910",
"text": "Government agencies and consulting companies in charge of pavement management face the challenge of maintaining pavements in serviceable conditions throughout their life from the functional and structural standpoints. For this, the assessment and prediction of the pavement conditions are crucial. This study proposes a neuro-fuzzy model to predict the performance of flexible pavements using the parameters routinely collected by agencies to characterize the condition of an existing pavement. These parameters are generally obtained by performing falling weight deflectometer tests and monitoring the development of distresses on the pavement surface. The proposed hybrid model for predicting pavement performance was characterized by multilayer, feedforward neural networks that led the reasoning process of the IF-THEN fuzzy rules. The results of the neuro-fuzzy model were superior to those of the linear regression model in terms of accuracy in the approximation. The proposed neuro-fuzzy model showed good generalization capability, and the evaluation of the model performance produced satisfactory results, demonstrating the efficiency and potential of these new mathematical modeling techniques.",
"title": ""
},
{
"docid": "886285e5e732697d956c5c70713c5acb",
"text": "Falls are the leading cause of injury-related morbidity and mortality among older adults. Over 90 % of hip and wrist fractures and 60 % of traumatic brain injuries in older adults are due to falls. Another serious consequence of falls among older adults is the ‘long lie’ experienced by individuals who are unable to get up and remain on the ground for an extended period of time after a fall. Considerable research has been conducted over the past decade on the design of wearable sensor systems that can automatically detect falls and send an alert to care providers to reduce the frequency and severity of long lies. While most systems described to date incorporate threshold-based algorithms, machine learning algorithms may offer increased accuracy in detecting falls. In the current study, we compared the accuracy of these two approaches in detecting falls by conducting a comprehensive set of falling experiments with 10 young participants. Participants wore waist-mounted tri-axial accelerometers and simulated the most common causes of falls observed in older adults, along with near-falls and activities of daily living. The overall performance of five machine learning algorithms was greater than the performance of five threshold-based algorithms described in the literature, with support vector machines providing the highest combination of sensitivity and specificity.",
"title": ""
},
{
"docid": "8f729918f60b6caa1306c632912ba820",
"text": "In the past few years, there has been an increasing availability of technologies for the acquisition of digital 3D models of real objects and the consequent use of these models in a variety of applications, in medicine, engineering, and cultural heritage. In this framework, content-based retrieval of 3D objects is becoming an important subject of research, and finding adequate descriptors to capture global or local characteristics of the shape has become one of the main investigation goals. In this article, we present a comparative analysis of a few different solutions for description and retrieval by similarity of 3D models that are representative of the principal classes of approaches proposed. We have developed an experimental analysis by comparing these methods according to their robustness to deformations, the ability to capture an object's structural complexity, and the resolution at which models are considered.",
"title": ""
},
{
"docid": "ae1b3d2668ed17df54a2cdb758c6b427",
"text": "Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIMICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. It is competitive with (and complementary to) a supervised characterbased model in low-resource settings.",
"title": ""
},
{
"docid": "ac998aea7d2bfdf59efc009c3beec916",
"text": "Through a comprehensive review of the literature on sexual assault, the authors propose to clarify the different stages of the exam and help the practitioner to the forensic interpretation of lesions. The authors describe the basic principles that make consensus in how to interview victims in order to increase the reliability of the information collected. The various medical data that must be collected allowing to guide diagnosis (urogenital symptoms, sexual behaviour disorder) or facilitate the interpretation of lesions (age of puberty, use of tampons…) are specified as well as the different positions of examination and their association to other complementary techniques (Foley catheter, colposcopy, toluidine blue). The authors present a simple decision tree that can help the practitioner to interpret the laceration of the hymen. They detail the description and forensic interpretation of all genital lesions that may be encountered as a result of sexual assault, and the pitfalls to avoid. Finally, two main problems in the interpretation of lesions are described, the absence of injury after penetration and the accidental genital lesions.",
"title": ""
}
] |
scidocsrr
|
6d6864bede5dd11b168e05d3a2f4c50d
|
Customer satisfaction in E-Commerce A case study of China and Bangladesh
|
[
{
"docid": "43bc62e674ae5c8785d00406b307b478",
"text": "We explore the theoretical foundations of value creation in e-business by examining how 59 American and European e-businesses that have recently become publicly traded corporations create value. We observe that in e-business new value can be created by the ways in which transactions are enabled. Grounded in the rich data obtained from case study analyses and in the received theory in entrepreneurship and strategic management, we develop a model of the sources of value creation. The model suggests that the value creation potential of e-businesses hinges on four interdependent dimensions, namely: efficiency, complementarities, lock-in, and novelty. Our findings suggest that no single entrepreneurship or strategic management theory can fully explain the value creation potential of e-business. Rather, an integration of the received theoretical perspectives on value creation is needed. To enable such an integration, we offer the business model construct as a unit of analysis for future research on value creation in e-business. A business model depicts the design of transaction content, structure, and governance so as to create value through the exploitation of business opportunities. We propose that a firm’s business model is an important locus of innovation and a crucial source of value creation for the firm and its suppliers, partners, and customers. Copyright 2001 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "6c2afcf5d7db0f5d6baa9d435c203f8a",
"text": "An attempt to extend current thinking on postpurchase response to include attribute satisfaction and dissatisfaction as separate determinants not fully reflected in either cognitive (i.e.. expectancy disconfirmation) or affective paradigms is presented. In separate studies of automobile satisfaction and satisfaction with course instruction, respondents provided the nature of emotional experience, disconfirmation perceptions, and separate attribute satisfaction and dissatisfaction judgments. Analysis confirmed the disconfirmation effect and tbe effects of separate dimensions of positive and negative affect and also suggested a multidimensional structure to the affect dimensions. Additionally, attribute satisfaction and dissatisfaction were significantly related to positive and negative affect, respectively, and to overall satisfaction. It is suggested that all dimensions tested are needed for a full accounting of postpurchase responses in usage.",
"title": ""
}
] |
[
{
"docid": "d501d2758e600c307e41a329222bf7d6",
"text": "Placebo effects are beneficial effects that are attributable to the brain–mind responses to the context in which a treatment is delivered rather than to the specific actions of the drug. They are mediated by diverse processes — including learning, expectations and social cognition — and can influence various clinical and physiological outcomes related to health. Emerging neuroscience evidence implicates multiple brain systems and neurochemical mediators, including opioids and dopamine. We present an empirical review of the brain systems that are involved in placebo effects, focusing on placebo analgesia, and a conceptual framework linking these findings to the mind–brain processes that mediate them. This framework suggests that the neuropsychological processes that mediate placebo effects may be crucial for a wide array of therapeutic approaches, including many drugs.",
"title": ""
},
{
"docid": "065b06b9a9d85f09c8847bd0ebc3b691",
"text": "The federal tax code allows employers to provide tax-free transit benefits to employees. Although transit benefits programs are commonly promoted as having advantages for transit agencies, such as increasing transit ridership and transit agency revenues, their effects and effectiveness are not well understood and need to be better assessed. This research is designed to help transit agencies, policy-makers, and organizations that promote transit benefits better understand what effects they might expect from a transit benefits program and how to quantify these effects. Overall, the research found that transit benefits programs can be effective for transit agencies attempting to meet various goals, in terms of increasing ridership and revenues, and decreasing costs. However, it is critical to set realistic expectations and conduct valid evaluations to assess these effects. Introduction U.S. tax law allows employers to offer employees tax-free transit benefits (U.S. Department of the Treasury 2004). Regardless of how the benefits are offered (employer-paid, employee-paid, or a combination of the two), both the employer and the employee enjoy tax advantages since neither pays federal payroll or income taxes on the benefit. Although the cost savings from the benefits are relatively Journal of Public Transportation, Vol. 11, No. 2, 2008 2 straightforward, their impacts on transit ridership are not as well understood, and little rigorous research has been conducted on the topic at a national scale. While it makes intuitive sense that transit benefits programs should increase transit use, it is possible that these programs primarily support existing transit riders. To induce employers to offer transit benefits, many transit agencies have established programs that allow employers to purchase various pass types and vouchers at a discount, in bulk, or using other types of incentives. These programs make it easy for employers to offer transit benefits, as well as provide the transit or other sponsoring agency an opportunity to “brand” their program and increase their name recognition. In addition, tax law allows employers to purchase fare media on a cash reimbursement basis if no pass or voucher is available in the region, giving the agency another incentive to create a transit benefits program. This research focuses on how transit benefits programs affect transit agencies in terms of ridership, revenues, and costs. The following questions provide a rough outline of the topics covered in the article: • How much systemwide ridership and revenues come from transit benefits programs? The share of overall ridership and revenues that come from employer programs affects the extent to which these programs can help retain and attract riders and yield cost savings to the transit agency. • Do transit benefits programs increase transit ridership and revenues? Research on the impacts of transit benefits programs on employee travel behavior suggests that such efforts can increase transit ridership. This article explores the extent to which transit ridership and revenues increase, and how program design affects revenues per rider. • How much do transit benefits programs cost to administer? These costs include staff time for employer outreach as well as marketing and other fees. • Are there differences in revenue, ridership, or cost characteristics between different program types? 
If different types of programs (e.g., universal passes, monthly passes) generate different levels of revenues per rider and have different costs, it is useful for transit agencies to understand these effects so that they can offer the program options that best meet their goals. Impacts of Transit Benefits Programs Data Sources and Approach The results summarized in this section are drawn from interviews that the research team conducted in 200 with representatives from the following seven transit agencies. These agencies were selected to participate because they provide a range of mode options and program types, cover various geographic areas, and have differing ridership levels: • Washington Metropolitan Area Transit Authority (WMATA), Washington, D.C. • Metropolitan Atlanta Rapid Transit Authority (MARTA), Atlanta, Georgia • King County Metro, Seattle, Washington • Regional Transportation District (RTD), Denver, Colorado • Metro Transit, Minneapolis/St. Paul, Minnesota • Santa Clara Valley Transportation Authority (VTA), San Jose, California • Valley Metro, Phoenix, Arizona As the focus of the research was ridership, revenues, and costs to transit agencies, and the differences between different types of pass programs, we studied only agencies that operate their own program pass or voucher programs. A subset of voucher programs are operated by private third-party providers, sometimes as the sole program, sometimes in conjunction with public agency programs. However, the research team chose not to include regions where these were the only programs, as they represent only voucher and not pass programs. The research team conducted the interviews using an interview guide, asking follow-up or clarifying questions when necessary. In some cases, the persons interviewed sent additional information following the interview. Table provides background information on the seven transit agencies. As part of the project, the research team also collected ridership surveys and surveys pertaining to commuter benefits where available. Types of Transit Benefits Programs Of the seven transit agencies interviewed, four had multiple programs. Types of employer programs offered included monthly passes, stored value cards, universal passes, and vouchers (which can be traded in for transit fare media or used on vanpools). Generally these situations have evolved in response to employer demands and available technology. As Table shows, three of the seven agencies have only one employer program, and King County Metro has seven. Journal of Public Transportation, Vol. 11, No. 2, 2008 4 Ta bl e 1. S um m ar y Ch ar ac te ri st ic s of T ra ns it A ge nc ie s an d th ei r Pr og ra m T yp es (2 00 3) Impacts of Transit Benefits Programs Ridership Impacts Among the agencies interviewed, employer programs contributed between and 2 percent of total transit riders, and agencies with trend data available have shown increases in employee participation over time. However, it is difficult to determine if the increases in employee participation have led to increased ridership systemwide; in two cases the answers is a qualified yes, while in two others the effects are unclear. Employee Participation Employees participating in transit benefits programs make up a substantial portion of total transit ridership for many transit agencies. The agencies interviewed estimated that the percentage of all riders using employer transit benefits programs was between and 2 percent. 
The highest percentages of transit riders who participate in employer-sponsored transit benefits programs were at WMATA, Valley Metro, and King County Metro. WMATA attracts a large number of federal employees who receive full employer-paid benefits. Valley Metro is the smallest of the seven agencies in terms of total systemwide ridership, but has the largest number of staff working in employer outreach (including rideshare programs), so the program’s success may stem in part from this intensive effort. Table 2 provides ridership figures for each program and the percent of total system riders using transit benefits. Employee Participation Trends Employee participation in transit benefits programs has been increasing for nearly all of the agencies that provided historical participation trends. Even where employer participation has declined or remained relatively unchanged, employee participation has consistently increased. Five agencies had trend information on the number of employees participating in transit benefits programs, which is graphed in Figure .2 Three of these are universal pass programs, which track the number of employees at participating employers. While generally not all universal pass recipients ride transit, the figures assume that all of King County’s UPass program employee participants ride transit, since students, faculty, and staff are allowed to opt out of the program. Journal of Public Transportation, Vol. 11, No. 2, 2008 Table 2. Employee Participation in Transit Benefits Programs (as of 2003) Impacts of Transit Benefits Programs Figure 1. Trends in Employee Participation at Five Transit Benefits Programs Most striking is the large jump in participation in WMATA’s transit benefits program from 2000 to 200 . Two factors contributing to this increase were the increase in the tax-free limit from $ to $ 00 and implementation of an Executive Order that requires federal government agencies to fully pay for transit benefits up to the tax-free limit for all interested executive branch employees in the Washington, D.C. region. VTA, MARTA, and RTD have shown much steadier increases in employee participation over time. VTA and MARTA reported being affected by economic downturns, and all three had fare increases (or in the case of MARTA, a reduction in the employer discount that made employers’ costs higher). The strong employee participation figures seem to indicate that the programs are fairly resilient in the face of financial obstacles for employers. Participation in King County’s UPass has been steady, but the program only serves the University of Washington, and so it may have reached its saturation point among potential recipients. Contributions of Transit Benefit Riders to Overall Ridership Growth It is difficult to develop quantitative estimates of the extent to which the transit benefits programs have affected overall transit ridership at agencies over time Journal of Public Transportation, Vol. 11, No. 2, 2008 because it is impossible to state what ridership trends would have been if such programs were not in place. Ho",
"title": ""
},
{
"docid": "8d104169f3862bc7c54d5932024ed9f6",
"text": "Integer optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size. For example, an airline may need to determine crew schedules that minimize the total operating cost; an automotive manufacturer may want to determine the optimal mix of models to produce in order to maximize profit; or a flexible manufacturing facility may want to schedule production for a plant without knowing precisely what parts will be needed in future periods. In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.",
"title": ""
},
{
"docid": "ba2e16103676fa57bc3ca841106d2d32",
"text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.",
"title": ""
},
{
"docid": "3a497f0634a56ba975948d8bd18e8af8",
"text": "In this paper we evaluate the WER improvement from modeling pronunciation probabilities and word-specific silence probabilities in speech recognition. We do this in the context of Finite State Transducer (FST)-based decoding, where pronunciation and silence probabilities are encoded in the lexicon (L) transducer. We describe a novel way to model word-dependent silence probabilities, where in addition to modeling the probability of silence following each individual word, we also model the probability of each word appearing after silence. All of these probabilities are estimated from aligned training data, with suitable smoothing. We conduct our experiments on four commonly used automatic speech recognition datasets, namely Wall Street Journal, Switchboard, TED-LIUM, and Librispeech. The improvement from modeling pronunciation and silence probabilities is small but fairly consistent across datasets.",
"title": ""
},
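The passage above says the word-dependent silence probabilities are estimated from aligned training data "with suitable smoothing" but does not give the estimator. The sketch below shows one plausible pseudo-count estimate of the probability of silence following each word; the toy alignments and the smoothing weight are illustrative assumptions, not the paper's actual recipe.

```python
# Minimal sketch of estimating word-dependent silence probabilities from
# aligned training data: for each word, count how often it is followed by
# silence, and smooth the estimate toward the corpus-wide silence rate.
# The toy alignments and the smoothing weight are illustrative assumptions.
from collections import Counter

# Each utterance alignment is a list of (word, followed_by_silence) pairs.
alignments = [
    [("hello", True), ("world", False)],
    [("hello", False), ("there", True)],
    [("world", True)],
]

followed, total = Counter(), Counter()
for utt in alignments:
    for word, sil in utt:
        total[word] += 1
        followed[word] += int(sil)

global_rate = sum(followed.values()) / sum(total.values())
smoothing = 2.0                      # pseudo-count weight (assumed)

def p_silence_after(word):
    return (followed[word] + smoothing * global_rate) / (total[word] + smoothing)

for w in total:
    print(w, round(p_silence_after(w), 3))
```

The symmetric quantity described in the abstract, the probability of a word appearing after silence, can be estimated the same way by counting (preceded_by_silence, word) pairs.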
{
"docid": "862f795008ce9f622b5418430adcdeda",
"text": "BACKGROUND\nFeedback is an essential element of the educational process for clinical trainees. Performance-based feedback enables good habits to be reinforced and faulty ones to be corrected. Despite its importance, most trainees feel that they do not receive adequate feedback and if they do, the process is not effective.\n\n\nAIMS AND METHODS\nThe authors reviewed the literature on feedback and present the following 12 tips for clinical teachers to provide effective feedback to undergraduate and graduate medical trainees. In most of the tips, the focus is the individual teacher in clinical settings, although some of the suggestions are best adopted at the institutional level.\n\n\nRESULTS\nClinical educators will find the tips practical and easy to implement in their day-to-day interactions with learners. The techniques can be applied in settings whether the time for feedback is 5 minutes or 30 minutes.\n\n\nCONCLUSIONS\nClinical teachers can improve their skills for giving feedback to learners by using the straightforward and practical tools described in the subsequent sections. Institutions should emphasise the importance of feedback to their clinical educators, provide staff development and implement a mechanism by which the quantity and quality of feedback is monitored.",
"title": ""
},
{
"docid": "aa57c38307c4473aadb84b36ede7e3d8",
"text": "OBJECTIVE\nAlthough variable-damping knee prostheses offer some improvements over mechanically passive prostheses to transfemoral amputees, there is insufficient evidence that such prostheses provide advantages at self-selected walking speeds. In this investigation, we address this question by comparing two variable-damping knees, the hydraulic-based Otto Bock C-leg and the magnetorheological-based Ossur Rheo, with the mechanically passive, hydraulic-based Mauch SNS.\n\n\nDESIGN\nFor each prosthesis, metabolic data were collected on eight unilateral amputees walking at self-selected speeds across an indoor track. Furthermore, kinetic, kinematic, and electromyographic data were collected while walking at self-selected speeds across a 10-m walkway in a laboratory.\n\n\nRESULTS\nWhen using the Rheo, metabolic rate decreases by 5% compared with the Mauch and by 3% compared with the C-leg. Furthermore, for the C-leg and Rheo knee devices, we observe biomechanical advantages over the mechanically passive Mauch. These advantages include an enhanced smoothness of gait, a decrease in hip work production, a lower peak hip flexion moment at terminal stance, and a reduction in peak hip power generation at toe-off.\n\n\nCONCLUSION\nThe results of this study indicate that variable-damping knee prostheses offer advantages over mechanically passive designs for unilateral transfemoral amputees walking at self-selected ambulatory speeds, and the results further suggest that a magnetorheological-based system may have advantages over hydraulic-based designs.",
"title": ""
},
{
"docid": "d353db098a7ca3bd9dc73b803e7369a2",
"text": "DevOps community advocates collaboration between development and operations staff during software deployment. However this collaboration may cause a conceptual deficit. This paper proposes a Unified DevOps Model (UDOM) in order to overcome the conceptual deficit. Firstly, the origin of conceptual deficit is discussed. Secondly, UDOM model is introduced that includes three sub-models: application and data model, workflow execution model and infrastructure model. UDOM model can help to scale down deployment time, mitigate risk, satisfy customer requirements, and improve productivity. Finally, this paper can be a roadmap for standardization DevOps terminologies, concepts, patterns, cultures, and tools.",
"title": ""
},
{
"docid": "89c52082d42a9f6445a7771852db3330",
"text": "Total quality management (TQM) is an approach to management embracing both social and technical dimensions aimed at achieving excellent results, which needs to be put into practice through a specific framework. Nowadays, quality award models, such as the Malcolm Baldrige National Quality Award (MBNQA) and the European Foundation for Quality Management (EFQM) Excellence Model, are used as a guide to TQM implementation by a large number of organizations. Nevertheless, there is a paucity of empirical research confirming whether these models clearly reflect the main premises of TQM. The purpose of this paper is to analyze the extent to which the EFQM Excellence Model captures the main assumptions involved in the TQM concept, that is, the distinction between technical and social TQM issues, the holistic interpretation of TQM in the firm, and the causal linkage between TQM procedures and organizational performance. Based on responses collected from managers of 446 Spanish companies by means of a structured questionnaire, we find that: (a) social and technical dimensions are embedded in the model; (b) both dimensions are intercorrelated; (c) they jointly enhance results. These findings support the EFQM Excellence Model as an operational framework for TQM, and also reinforce the results obtained in previous studies for the MBNQA, suggesting that quality award models really are TQM frameworks. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +34 964 72 85 34; fax: +34 964 72 86 29. E-mail address: bou@emp.uji.es (J.C. Bou-Llusar).",
"title": ""
},
{
"docid": "b80b3633520313415cc454fdefc5d022",
"text": "The current study aimed to explore men’s experience of the UK Criminal Justice System (CJS) following female-perpetrated intimate partner violence (IPV). Unstructured face-to-face and Skype interviews were conducted with six men aged between 40–65 years. Interviews were transcribed and analysed using interpretative phenomenological analysis (IPA). Due to the method of analysis and the sensitive nature of the research, the researcher engaged in a process of reflexivity. Four main themes were identified, including ‘Guilty until Proven Innocent: Victim Cast as Perpetrator;’ ‘Masculine Identity;’ ‘Psychological Impact’ and ‘Light at the End of the Tunnel.’ Themes were discussed and illustrated with direct quotes drawn from the transcripts. Directions for future research, criminal justice interventions, and therapeutic interventions were discussed.",
"title": ""
},
{
"docid": "6087be6cef33af7d8fbfa55c8125bdb7",
"text": "Support Vector Machines (SVM) are the classifiers which were originally designed for binary classification. The classification applications can solve multi-class problems. Decision-tree-based support vector machine which combines support vector machines and decision tree can be an effective way for solving multi-class problems in Intrusion Detection Systems (IDS). This method can decrease the training and testing time of the IDS, increasing the efficiency of the system. The different ways to construct the binary trees divides the data set into two subsets from root to the leaf until every subset consists of only one class. The construction order of binary tree has great influence on the classification performance. In this paper we are studying two decision tree approaches: Hierarchical multiclass SVM and Tree structured multiclass SVM, to construct multiclass intrusion detection system.",
"title": ""
},
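As a rough illustration of the decision-tree-based multiclass SVM idea described above, the sketch below builds a binary tree of SVMs: each node splits the remaining classes into two groups, trains a binary SVM to separate them, and prediction descends the tree until a single class remains. The even class split, the RBF kernel, scikit-learn's SVC, and the random intrusion-style labels are assumptions for the sketch, not details from the paper.

```python
# Minimal sketch (not the paper's implementation) of a tree-structured
# multiclass SVM: a binary SVM at each internal node separates two class
# subsets; prediction walks down the tree until one class is left.
import numpy as np
from sklearn.svm import SVC

class SVMTreeNode:
    def __init__(self, classes):
        self.classes = list(classes)
        self.svm = None
        self.left = self.right = None   # children hold the two class subsets

    def fit(self, X, y):
        if len(self.classes) == 1:
            return self
        half = len(self.classes) // 2
        left_cls, right_cls = self.classes[:half], self.classes[half:]
        mask = np.isin(y, self.classes)
        Xn, yn = X[mask], y[mask]
        target = np.isin(yn, right_cls).astype(int)   # 0 = left subset, 1 = right subset
        self.svm = SVC(kernel="rbf", gamma="scale").fit(Xn, target)
        self.left = SVMTreeNode(left_cls).fit(X, y)
        self.right = SVMTreeNode(right_cls).fit(X, y)
        return self

    def predict_one(self, x):
        node = self
        while len(node.classes) > 1:
            node = node.right if node.svm.predict(x.reshape(1, -1))[0] == 1 else node.left
        return node.classes[0]

# Illustrative data: four hypothetical traffic classes.
X = np.random.randn(120, 4)
y = np.random.choice(["dos", "probe", "r2l", "normal"], size=120)
tree = SVMTreeNode(sorted(set(y))).fit(X, y)
print(tree.predict_one(X[0]))
```

How the classes are grouped at each node (the construction order) is exactly the design choice the passage highlights, since it strongly affects classification performance.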
{
"docid": "3a0d2784b1115e82a4aedad074da8c74",
"text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c6a72ad90233fdf717b2538778d947cc",
"text": "This article proposes that neuroscience can shape future theory and models in consumer decision making and suggests ways that neuroscience methods can be used in decision-making research. The article argues that neuroscience facilitates better theory development and empirical testing by considering the physiological context and the role of constructs such as hunger, stress, and social influence on consumer choice and preferences. Neuroscience can also provide new explanations Mark Lett (2012) 23:473–485 DOI 10.1007/s11002-012-9188-z C. Yoon (*) Stephen M. Ross School of Business, University of Michigan, Ann Arbor, MI, USA e-mail: yoonc@umich.edu R. Gonzalez : I. Liberzon University of Michigan, Ann Arbor, MI, USA A. Bechara University of Southern California, Los Angeles, CA, USA G. S. Berns Emory University, Atlanta, GA, USA A. A. Dagher : L. Dubé McGill University, Montreal, Canada S. A. Huettel Duke University, Durham, NC, USA J. W. Kable University of Pennsylvania, Philadelphia, PA, USA H. Plassmann INSEAD, Fontainebleau, France A. Smidts Erasmus University, Rotterdam, The Netherlands C. Spence University of Oxford, Oxford, UK for different sources of heterogeneity within and across populations, suggest novel hypotheses with respect to choices and underlying mechanisms that accord with an understanding of biology, and allow for the use of neural data to make better predictions about consumer behavior. The article suggests that despite some challenges associated with incorporating neuroscience into research on consumer decision processes, the use of neuroscience paradigms will produce a deeper understanding of decision making that can lead to the development of more effective decision aids and interventions.",
"title": ""
},
{
"docid": "4cb0d0d6f1823f108a3fc32e0c407605",
"text": "This paper describes a novel method to approximate instantaneous frequency of non-stationary signals through an application of fractional Fourier transform (FRFT). FRFT enables us to build a compact and accurate chirp dictionary for each windowed signal, thus the proposed approach offers improved computational efficiency, and good performance when compared with chirp atom method.",
"title": ""
},
{
"docid": "1af40b48f5ecccdbf375a4783656f637",
"text": "A novel pulsewidth modulation buck–boost ac chopper using regenerative dc snubbers is proposed and analyzed. Compared to the previous buck–boost ac choppers, ac snubbers causing power loss are eliminated using regenerative dc snubbers. Experimental results show that the proposed scheme gives good steady-state performance of the ac chopper, which coincides with the theoretical results.",
"title": ""
},
{
"docid": "6b200a2fe32af23d40fd45d340435892",
"text": "Otocephaly, characterized by mandibular hypoplasia or agnathia, ventromedial auricular malposition (melotia) and/or auricular fusion (synotia), and microstomia with oroglossal hypoplasia or aglossia, is an extremely rare anomalad, identified in less than 1 in 70,000 births. The malformation spectrum is essentially lethal, because of ventilatory problems, and represents a developmental field defect of blastogenesis primarily affecting thefirst branchial arch derivatives. Holoprosencephaly is the most commonly identified association, but skeletal, genitourinary, and cardiovascular anomalies, and situs inversus have been reported. Polyhydramnios may be the presenting feature, but prenatal diagnosis has been uncommon. We present five new cases of otocephaly, the largest published series to date, with comprehensive review of the literature and an update of research in the etiopathogenesis of this malformation complex. One of our cases had situs inversus, and two presented with unexplained polyhydramnios. Otocephaly, while quite rare, should be considered in the differential diagnosis of this gestational complication.",
"title": ""
},
{
"docid": "4e1ba3178e40738ccaf2c2d76dd417d8",
"text": "We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse independent public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant and the data is available online.",
"title": ""
},
{
"docid": "b8b3761b658e37783afb1157ef0844b5",
"text": "Biometric recognition refers to the automated recognition of individuals based on their biological and behavioral characteristics such as fingerprint, face, iris, and voice. The first scientific paper on automated fingerprint matching was published by Mitchell Trauring in the journal Nature in 1963. The first objective of this paper is to document the significant progress that has been achieved in the field of biometric recognition in the past 50 years since Trauring’s landmark paper. This progress has enabled current state-of-the-art biometric systems to accurately recognize individuals based on biometric trait(s) acquired under controlled environmental conditions from cooperative users. Despite this progress, a number of challenging issues continue to inhibit the full potential of biometrics to automatically recognize humans. The second objective of this paper is to enlist such challenges, analyze the solutions proposed to overcome them, and highlight the research opportunities in this field. One of the foremost challenges is the design of robust algorithms for representing and matching biometric samples obtained from uncooperative subjects under unconstrained environmental conditions (e.g., recognizing faces in a crowd). In addition, fundamental questions such as the distinctiveness and persistence of biometric traits need greater attention. Problems related to the security of biometric data and robustness of the biometric system against spoofing and obfuscation attacks, also remain unsolved. Finally, larger system-level issues like usability, user privacy concerns, integration with the end application, and return on investment have not been adequately addressed. Unlocking the full potential of biometrics through inter-disciplinary research in the above areas will not only lead to widespread adoption of this promising technology, but will also result in wider user acceptance and societal impact. c © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
078228fe2a87d76633e83b00e9778ab7
|
A survey of cross-lingual embedding models
|
[
{
"docid": "41e1985a761c31bddd3ff8c98e409482",
"text": "In many languages, sparse availability of resources causes numerous challenges for textual analysis tasks. Text classification is one of such standard tasks that is hindered due to limited availability of label information in lowresource languages. Transferring knowledge (i.e. label information) from high-resource to low-resource languages might improve text classification as compared to the other approaches like machine translation. We introduce BRAVE (Bilingual paRAgraph VEctors), a model to learn bilingual distributed representations (i.e. embeddings) of words without word alignments either from sentencealigned parallel or label-aligned non-parallel document corpora to support cross-language text classification. Empirical analysis shows that classification models trained with our bilingual embeddings outperforms other stateof-the-art systems on three different crosslanguage text classification tasks.",
"title": ""
},
{
"docid": "4b983214cbc0bf42ee8d04ebf8a31fa8",
"text": "We introduce BilBOWA (“Bilingual Bag-of-Words without Alignments”), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large datasets and does not require wordaligned training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient crosslingual feature learning. We show that bilingual embeddings learned using the proposed model outperforms state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on the WMT11 data. Our code will be made available as part of the open-source word2vec toolkit.",
"title": ""
},
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
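The abstract above only summarizes the reparameterization, so the sketch below shows the kind of diagonal-favoring alignment prior such a log-linear model of IBM Model 2 uses: target and source positions close to the sentence-pair diagonal receive exponentially more prior mass. The missing null-word component and the fixed tension parameter lambda_ are simplifying assumptions.

```python
# Minimal sketch (assumptions: no null word, lambda_ is a hand-set tension
# parameter) of a diagonal-favoring alignment prior for a log-linear
# reparameterization of IBM Model 2.
import numpy as np

def alignment_prior(src_len: int, tgt_len: int, lambda_: float = 4.0) -> np.ndarray:
    """Return a (tgt_len x src_len) matrix; row j is p(a_j = i | j, src_len, tgt_len)."""
    prior = np.zeros((tgt_len, src_len))
    for j in range(tgt_len):
        for i in range(src_len):
            # penalize distance from the diagonal, measured in relative positions
            prior[j, i] = -abs((i + 1) / src_len - (j + 1) / tgt_len)
    prior = np.exp(lambda_ * prior)
    return prior / prior.sum(axis=1, keepdims=True)

# Example: a 5-word source, 4-word target pair; mass concentrates near the diagonal.
print(alignment_prior(5, 4).round(2))
```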
] |
[
{
"docid": "f8d0929721ba18b2412ca516ac356004",
"text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.",
"title": ""
},
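As a hedged illustration of the Kelvin model mentioned above (a spring and viscous damper acting in parallel), the sketch below integrates m·x'' + c·x' + k·x = 0 from an assumed impact speed and reports peak crush and deceleration; all numeric parameters are placeholders rather than values identified from real crash data.

```python
# Minimal sketch of a Kelvin (parallel spring-damper) vehicle crash model,
# m*x'' + c*x' + k*x = 0, integrated from an initial impact speed.
# Mass, stiffness, damping and impact speed are illustrative assumptions.
import numpy as np
from scipy.integrate import odeint

m, k, c = 1000.0, 5.0e5, 1.5e4      # kg, N/m, N*s/m (assumed)
v0 = 15.0                            # impact speed in m/s (assumed)

def kelvin(state, t):
    x, v = state
    return [v, -(k * x + c * v) / m]

t = np.linspace(0.0, 0.2, 2001)      # 200 ms crash pulse
x, v = odeint(kelvin, [0.0, v0], t).T
a = -(k * x + c * v) / m             # deceleration, used for severity assessment

print(f"max crush  : {x.max():.3f} m")
print(f"peak decel : {abs(a).max() / 9.81:.1f} g")
```

In practice the parameter-identification step the paper describes would fit k and c to measured acceleration pulses before any severity assessment is made.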
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "7867544be1b36ffab85b02c63cb03922",
"text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) fiiters can be obtained by cascading the processes of decimation and interpolation. Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.",
"title": ""
},
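A minimal illustration of the multistage idea described above: splitting an overall decimation factor into cascaded stages, which generally allows much shorter anti-aliasing FIR filters than a single-stage design. The factor split (10 and 5) and the test signal are assumptions, not values taken from the paper's design curves.

```python
# Minimal sketch of two-stage decimation with scipy: an overall rate
# reduction of 50 is split into cascaded stages of 10 and 5.
import numpy as np
from scipy.signal import decimate

fs = 100_000.0                                   # original sample rate (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.randn(t.size)

# Single-stage decimation by 50 vs. cascaded stages of 10 and 5.
y_single = decimate(x, 50, ftype="fir", zero_phase=True)
y_multi = decimate(decimate(x, 10, ftype="fir", zero_phase=True),
                   5, ftype="fir", zero_phase=True)

print(y_single.shape, y_multi.shape)             # both are 50x shorter than x
```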
{
"docid": "d1b6091e010cba3abc340efeab77a97b",
"text": "Recently, the term knowledge graph has been used frequently in research and business, usually in close association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing. Its popularity is clearly influenced by the introduction of Google’s Knowledge Graph in 2012, and since then the term has been widely used without a definition. A large variety of interpretations has hampered the evolution of a common understanding of knowledge graphs. Numerous research papers refer to Google’s Knowledge Graph, although no official documentation about the used methods exists. The prerequisite for widespread academic and commercial adoption of a concept or technology is a common understanding, based ideally on a definition that is free from ambiguity. We tackle this issue by discussing and defining the term knowledge graph, considering its history and diversity in interpretations and use. Our goal is to propose a definition of knowledge graphs that serves as basis for discussions on this topic and contributes to a common vision.",
"title": ""
},
{
"docid": "72782fdcc61d1059bce95fe4e7872f5b",
"text": "ÐIn object prototype learning and similar tasks, median computation is an important technique for capturing the essential information of a given set of patterns. In this paper, we extend the median concept to the domain of graphs. In terms of graph distance, we introduce the novel concepts of set median and generalized median of a set of graphs. We study properties of both types of median graphs. For the more complex task of computing generalized median graphs, a genetic search algorithm is developed. Experiments conducted on randomly generated graphs demonstrate the advantage of generalized median graphs compared to set median graphs and the ability of our genetic algorithm to find approximate generalized median graphs in reasonable time. Application examples with both synthetic and nonsynthetic data are shown to illustrate the practical usefulness of the concept of median graphs. Index TermsÐMedian graph, graph distance, graph matching, genetic algorithm,",
"title": ""
},
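To make the set median concept above concrete, the sketch below picks, from a small set of graphs, the member minimizing the sum of distances to all others, using networkx's graph edit distance as a stand-in for the paper's graph distance. The example graphs are illustrative; the generalized median, which may lie outside the set, is the harder problem the paper attacks with a genetic search.

```python
# Minimal sketch of the set median of a set of graphs: the member of the set
# that minimizes the sum of distances to all other members.
import networkx as nx

def set_median(graphs):
    def total_distance(g):
        return sum(nx.graph_edit_distance(g, h) for h in graphs if h is not g)
    return min(graphs, key=total_distance)

graphs = [nx.path_graph(4), nx.path_graph(5), nx.cycle_graph(4)]
median = set_median(graphs)
print(sorted(median.edges()))
```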
{
"docid": "d03fa0dcb14dc19ef5eca5a564b70238",
"text": "Many requirements documents are written in natural language (NL). However, with the flexibility of NL comes the risk of introducing unwanted ambiguities in the requirements and misunderstandings between stakeholders. In this paper, we describe an automated approach to identify potentially nocuous ambiguity, which occurs when text is interpreted differently by different readers. We concentrate on anaphoric ambiguity, which occurs when readers may disagree on how pronouns should be interpreted. We describe a number of heuristics, each of which captures information that may lead a reader to favor a particular interpretation of the text. We use these heuristics to build a classifier, which in turn predicts the degree to which particular interpretations are preferred. We collected multiple human judgements on the interpretation of requirements exhibiting anaphoric ambiguity and showed how the distribution of these judgements can be used to assess whether a particular instance of ambiguity is nocuous. Given a requirements document written in natural language, our approach can identify sentences that contain anaphoric ambiguity, and use the classifier to alert the requirements writer of text that runs the risk of misinterpretation. We report on a series of experiments that we conducted to evaluate the performance of the automated system we developed to support our approach. The results show that the system achieves high recall with a consistent improvement on baseline precision subject to some ambiguity tolerance levels, allowing us to explore and highlight realistic and potentially problematic ambiguities in actual requirements documents.",
"title": ""
},
{
"docid": "daa74311dafd227aa4ca0ae7ccabf12f",
"text": "Memristive devices are novel structures, developed primarily as memory. Another interesting application for memristive devices is logic circuits. In this paper, MRL (Memristor Ratioed Logic) - a hybrid CMOS-memristive logic family - is described. In this logic family, OR and AND logic gates are based on memristive devices, and CMOS inverters are added to provide a complete logic structure and signal restoration. Unlike previously published memristive-based logic families, the MRL family is compatible with standard CMOS logic. A case study of an eight-bit full adder is presented and related design considerations are discussed.",
"title": ""
},
{
"docid": "6a96103c1e5eebf799d64588314165f9",
"text": "Office workers everywhere are drowning in email—not only spam, but also large quantities of legitimate email to be read and organized for browsing. Although there have been extensive investigations of automatic document categorization, email gives rise to a number of unique challenges, and there has been relatively little study of classifying email into folders. This paper presents an extensive benchmark study of email foldering using two large corpora of real-world email messages and foldering schemes: one from former Enron employees, another from participants in an SRI research project. We discuss the challenges that arise from differences between email foldering and traditional document classification. We show experimental results from an array of automated classification methods and evaluation methodologies, including a new evaluation method of foldering results based on the email timeline, and including enhancements to the exponential gradient method Winnow, providing top-tier accuracy with a fraction the training time of alternative methods. We also establish that classification accuracy in many cases is relatively low, confirming the challenges of email data, and pointing toward email foldering as an important area for further research.",
"title": ""
},
{
"docid": "e7659e2c20e85f99996e4394fdc37a5c",
"text": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.",
"title": ""
},
{
"docid": "ed4d43cbe03b1e9e58aadb9b6449be50",
"text": "network is a complex and expensive task which requires careful planning to achieve the required radio signal coverage. Commercial radio network planning tools are available, but they are expensive and inflexible in the sense that they are often limited to a particular radio network technology, its frequency band(s) and a fixed set of channel models. This led us to develop an open-source radio coverage simulation tool with user-extendible set of radio propagation models, which is especially suitable for research work but at the same time also for professional communication network planning. The tool, GRASS-RaPlaT, is based on the open-source Geographical Resources Analysis Support System (GRASS) and currently includes modules for a number of channel models, a module for sectorization according to given antenna radiation patterns, a module for calculating and storing the complete radio network coverage data, and a number of supporting modules, e.g. for adapting input data and analyzing simulation results. Its computation has been tested with existing real GSM network data, and the accuracy of results evaluated by comparing with those from a professional radio network planning tool.",
"title": ""
},
{
"docid": "59f2379402ccb14a8b6cbb0185ac7782",
"text": "Manipulation problems involving many objects present substantial challenges for motion planning algorithms due to the high dimensionality and multi-modality of the search space. Symbolic task planners can efficiently construct plans involving many entities but cannot incorporate the constraints from geometry and kinematics. In this paper, we show how to extend the heuristic ideas from one of the most successful symbolic planners in recent years, the FastForward (FF) planner, to motion planning, and to compute it efficiently. We use a multi-query roadmap structure that can be conditionalized to model different placements of movable objects. The resulting tightly integrated planner is simple and performs efficiently in a collection of tasks involving manipulation of many objects.",
"title": ""
},
{
"docid": "8bc615dfa51a9c5835660c1b0eb58209",
"text": "Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows to accomplish independent control goals. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method.",
"title": ""
},
{
"docid": "60606403844df78f3d2a569813fdac96",
"text": "Charge transport models developed for disordered organic semiconductors predict a non-Arrhenius temperature dependence ln(mu) proportional, variant1/T(2) for the mobility mu. We demonstrate that in space-charge limited diodes the hole mobility (micro(h)) of a large variety of organic semiconductors shows a universal Arrhenius temperature dependence micro(h)(T) = micro(0)exp(-Delta/kT) at low fields, due to the presence of extrinsic carriers from the Ohmic contact. The transport in a range of organic semiconductors, with a variation in room temperature mobility of more than 6 orders of magnitude, is characterized by a universal mobility micro(0) of 30-40 cm(2)/V s. As a result, we can predict the full temperature dependence of their charge transport properties with only the mobility at one temperature known.",
"title": ""
},
{
"docid": "bf8ff16c84997fa12e1ae8bee1000565",
"text": "The demand for cloud computing is increasing dramatically due to the high computational requirements of business, social, web and scientific applications. Nowadays, applications and services are hosted on the cloud in order to reduce the costs of hardware, software and maintenance. To satisfy this high demand, the number of large-scale data centers has increased, which consumes a high volume of electrical power, has a negative impact on the environment, and comes with high operational costs. In this paper, we discuss many ongoing or implemented energy aware resource allocation techniques for cloud environments. We also present a comprehensive review on the different energy aware resource allocation and selection algorithms for virtual machines in the cloud. Finally, we come up with further research issues and challenges for future cloud environments.",
"title": ""
},
{
"docid": "a88cbc1a779763fe6724f732c20b423a",
"text": "Surface Acoustic Wave (SAW) devices, are not normally amenable to simulation through circuit simulators. In this letter, an electrical macromodel of Mason's Equivalent Circuit for an interdigital transducer (IDT) is proposed which is compatible to a widely used general purpose circuit simulator SPICE endowed with the capability to handle negative capacitances and inductances. Illustrations have been given to demonstrate the simplicity of ascertaining the frequency and time domain characteristics of IDT and amenability to simulate the IDT along with other external circuit elements.<<ETX>>",
"title": ""
},
{
"docid": "a5052a27ebbfb07b02fa18b3d6bff6fc",
"text": "Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense realvalued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daumé III (2009) applied on feature-rich CRFs.",
"title": ""
},
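For context on the Daumé III (2009) feature augmentation that the passage above extends, the sketch below shows the original trick: each feature vector is copied into a shared block and into its own domain's block, with zeros elsewhere. In the neural extension the passage describes, these copies are effectively replaced by one shared LSTM plus one LSTM per domain whose outputs are combined; the feature dimension and domain count here are illustrative assumptions.

```python
# Minimal sketch of the "frustratingly easy" feature augmentation: map a
# d-dim feature vector to (num_domains + 1) * d dims, with a shared block
# and one block per domain.
import numpy as np

def augment(x: np.ndarray, domain: int, num_domains: int) -> np.ndarray:
    d = x.shape[0]
    out = np.zeros((num_domains + 1) * d)
    out[:d] = x                                   # shared ("general") block
    out[(domain + 1) * d:(domain + 2) * d] = x    # domain-specific block
    return out

x = np.array([1.0, 0.0, 2.0])
print(augment(x, domain=1, num_domains=3))        # 12-dim augmented vector
```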
{
"docid": "c3c0de7f448c08ff8316ac2caed78b87",
"text": "Wearable robots, i.e. active orthoses, exoskeletons, and mechatronic prostheses, represent a class of biomechatronic systems posing severe constraints in terms of safety and controllability. Additionally, whenever the worn system is required to establish a well-tuned dynamic interaction with the human body, in order to exploit emerging dynamical behaviours, the possibility of having modular joints, able to produce a controllable viscoelastic behaviour, becomes crucial. Controllability is a central issue in wearable robotics applications, because it impacts robot safety and effectiveness. Under this regard, DC motors offer very good performances, provided that a proper mounting scheme is used in order to mimic the typical viscoelastici behaviour exhibited by biological systems, as required by the selected application. In this paper we report on the design of two compact devices for controlling the active and passive torques applied to the joint of a wearable robot for the lower limbs. The first device consists of a rotary Serial Elastic Actuator (SEA), incorporating a custom made torsion spring. The second device is a purely mechanical passive viscoelastici joint, functionally equivalent to a torsion spring mounted in parallel to a rotary viscous damper. The torsion stiffness and the damping coefficient can be easily tuned by acting on specific elements, thanks to the modular design of the device. The working principles and basic design choices regarding the overall architectures and the single components are presented and discussed.",
"title": ""
},
{
"docid": "ad0e17662f204f2617d672c6f1e01942",
"text": "Cholera toxin (CT), an AB(5)-subunit toxin, enters host cells by binding the ganglioside GM1 at the plasma membrane (PM) and travels retrograde through the trans-Golgi Network into the endoplasmic reticulum (ER). In the ER, a portion of CT, the enzymatic A1-chain, is unfolded by protein disulfide isomerase and retro-translocated to the cytosol by hijacking components of the ER associated degradation pathway for misfolded proteins. After crossing the ER membrane, the A1-chain refolds in the cytosol and escapes rapid degradation by the proteasome to induce disease by ADP-ribosylating the large G-protein Gs and activating adenylyl cyclase. Here, we review the mechanisms of toxin trafficking by GM1 and retro-translocation of the A1-chain to the cytosol.",
"title": ""
},
{
"docid": "bb5977b6bb06aa7bdcb7f0a74adf3271",
"text": "Aspect-level sentiment classification aims to identify the sentiment expressed towards some aspects given context sentences. In this paper, we introduce an attention-over-attention (AOA) neural network for aspect level sentiment classification. Our approach models aspects and sentences in a joint way and explicitly captures the interaction between aspects and context sentences. With the AOA module, our model jointly learns the representations for aspects and sentences, and automatically focuses on the important parts in sentences. Our experiments on laptop and restaurant datasets demonstrate our approach outperforms previous LSTM-based architectures.",
"title": ""
},
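The abstract above does not spell out the attention-over-attention equations, so the sketch below follows the usual AOA formulation as an assumption: an interaction matrix between sentence and aspect hidden states, column- and row-wise softmaxes, and an averaged aspect attention that re-weights the sentence attention. The hidden states are random stand-ins for LSTM outputs.

```python
# Minimal sketch of an attention-over-attention step on pre-computed hidden
# states. H_s: sentence states (n x d), H_a: aspect states (m x d).
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_over_attention(H_s, H_a):
    I = H_s @ H_a.T                    # (n, m) word-pair interaction matrix
    alpha = softmax(I, axis=0)         # column-wise: sentence attention per aspect word
    beta = softmax(I, axis=1)          # row-wise: aspect attention per sentence word
    beta_bar = beta.mean(axis=0)       # (m,) averaged aspect-level attention
    gamma = alpha @ beta_bar           # (n,) final attention over sentence words
    return H_s.T @ gamma               # weighted sentence representation (d,)

H_s, H_a = np.random.randn(7, 16), np.random.randn(3, 16)
print(attention_over_attention(H_s, H_a).shape)    # (16,)
```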
{
"docid": "531a7417bd66ff0fdd7fb35c7d6d8559",
"text": "G. R. White University of Sussex, Brighton, UK Abstract In order to design new methodologies for evaluating the user experience of video games, it is imperative to initially understand two core issues. Firstly, how are video games developed at present, including components such as processes, timescales and staff roles, and secondly, how do studios design and evaluate the user experience. This chapter will discuss the video game development process and the practices that studios currently use to achieve the best possible user experience. It will present four case studies from game developers Disney Interactive (Black Rock Studio), Relentless, Zoe Mode, and HandCircus, each detailing their game development process and also how this integrates with the user experience evaluation. The case studies focus on different game genres, platforms, and target user groups, ensuring that this chapter represents a balanced view of current practices in evaluating user experience during the game development process.",
"title": ""
}
] |
scidocsrr
|
5c8d41242b767e84e2557c8fc67740a8
|
Machine learning methods in chemoinformatics
|
[
{
"docid": "2052b47be2b5e4d0c54ab0be6ae1958b",
"text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .",
"title": ""
}
] |
[
{
"docid": "074d9b68f1604129bcfdf0bb30bbd365",
"text": "This paper describes a methodology for semi-supervised learning of dialogue acts using the similarity between sentences. We suppose that the dialogue sentences with the same dialogue act are more similar in terms of semantic and syntactic information. However, previous work on sentence similarity mainly modeled a sentence as bag-of-words and then compared different groups of words using corpus-based or knowledge-based measurements of word semantic similarity. Novelly, we present a vector-space sentence representation, composed of word embeddings, that is, the related word distributed representations, and these word embeddings are organised in a sentence syntactic structure. Given the vectors of the dialogue sentences, a distance measurement can be well-defined to compute the similarity between them. Finally, a seeded k-means clustering algorithm is implemented to classify the dialogue sentences into several categories corresponding to particular dialogue acts. This constitutes the semi-supervised nature of the approach, which aims to ameliorate the reliance of the availability of annotated corpora. Experiments with Switchboard Dialog Act corpus show that classification accuracy is improved by 14%, compared to the state-of-art methods based on Support Vector Machine.",
"title": ""
},
{
"docid": "139915d2aaf3698093b73ca81ebd7ad8",
"text": "When caring for patients, it is essential that nurses are using the current best practice. To determine what this is, nurses must be able to read research critically. But for many qualified and student nurses, the terminology used in research can be difficult to understand, thus making critical reading even more daunting. It is imperative in nursing that care has its foundations in sound research, and it is essential that all nurses have the ability to critically appraise research to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology.",
"title": ""
},
{
"docid": "22bd367cdda112e715f7c5535bc72ebb",
"text": "This paper introduces a complete side channel analysis toolbox, inclusive of the analog capture hardware, target device, capture software, and analysis software. The highly modular design allows use of the hardware and software with a variety of existing systems. The hardware uses a synchronous capture method which greatly reduces the required sample rate, while also reducing the data storage requirement, and improving synchronization of traces. The synchronous nature of the hardware lends itself to fault injection, and a module to generate glitches of programmable width is also provided. The entire design (hardware and software) is open-source, and maintained in a publicly available repository. Several long example capture traces are provided for researchers looking to evaluate standard cryptographic implementations.",
"title": ""
},
{
"docid": "ba6873627b976fa1a3899303b40eae3c",
"text": "Most plant seeds are dispersed in a dry, mature state. If these seeds are non-dormant and the environmental conditions are favourable, they will pass through the complex process of germination. In this review, recent progress made with state-of-the-art techniques including genome-wide gene expression analyses that provided deeper insight into the early phase of seed germination, which includes imbibition and the subsequent plateau phase of water uptake in which metabolism is reactivated, is summarized. The physiological state of a seed is determined, at least in part, by the stored mRNAs that are translated upon imbibition. Very early upon imbibition massive transcriptome changes occur, which are regulated by ambient temperature, light conditions, and plant hormones. The hormones abscisic acid and gibberellins play a major role in regulating early seed germination. The early germination phase of Arabidopsis thaliana culminates in testa rupture, which is followed by the late germination phase and endosperm rupture. An integrated view on the early phase of seed germination is provided and it is shown that it is characterized by dynamic biomechanical changes together with very early alterations in transcript, protein, and hormone levels that set the stage for the later events. Early seed germination thereby contributes to seed and seedling performance important for plant establishment in the natural and agricultural ecosystem.",
"title": ""
},
{
"docid": "0bf88df55230271c61966f90485cde00",
"text": "BACKGROUND\nNewer approaches for understanding suicidal behavior suggest the assessment of suicide-specific beliefs and cognitions may improve the detection and prediction of suicidal thoughts and behaviors. The Suicide Cognitions Scale (SCS) was developed to measure suicide-specific beliefs, but it has not been tested in a military setting.\n\n\nMETHODS\nData were analyzed from two separate studies conducted at three military mental health clinics (one U.S. Army, two U.S. Air Force). Participants included 175 active duty Army personnel with acute suicidal ideation and/or a recent suicide attempt referred for a treatment study (Sample 1) and 151 active duty Air Force personnel receiving routine outpatient mental health care (Sample 2). In both samples, participants completed self-report measures and clinician-administered interviews. Follow-up suicide attempts were assessed via clinician-administered interview for Sample 1. Statistical analyses included confirmatory factor analysis, between-group comparisons by history of suicidality, and generalized regression modeling.\n\n\nRESULTS\nTwo latent factors were confirmed for the SCS: Unloveability and Unbearability. Each demonstrated good internal consistency, convergent validity, and divergent validity. Both scales significantly predicted current suicidal ideation (βs >0.316, ps <0.002) and significantly differentiated suicide attempts from nonsuicidal self-injury and control groups (F(6, 286)=9.801, p<0.001). Both scales significantly predicted future suicide attempts (AORs>1.07, ps <0.050) better than other risk factors.\n\n\nLIMITATIONS\nSelf-report methodology, small sample sizes, predominantly male samples.\n\n\nCONCLUSIONS\nThe SCS is a reliable and valid measure that predicts suicidal ideation and suicide attempts among military personnel better than other well-established risk factors.",
"title": ""
},
{
"docid": "9e638e09b77463e8c232c7960d49a544",
"text": "Force feedback coupled with visual display allows people to interact intuitively with complex virtual environments. For this synergy of haptics and graphics to flourish, however, haptic systems must be capable of modeling environments with the same richness, complexity and interactivity that can be found in existing graphic systems. To help meet this challenge, we have developed a haptic rendering system that allows f r the efficient tactile display of graphical information. The system uses a common high-level framework to model contact constraints, surface shading, friction and tex ture. The multilevel control system also helps ensure that the haptic device will remain stable even as the limits of the renderer’s capabilities are reached. CR",
"title": ""
},
{
"docid": "3f7df0e43b3f954c11a4b3f330c9e437",
"text": "In recent years, Cloud computing is gaining much popularity as it can efficiently utilize the computing resources and hence can contribute to the issue of Green IT to save energy. So to make the Cloud services commercialized, Cloud markets are necessary and are being developed. As the increasing numbers of various Cloud services are rapidly evolving in the Cloud market, how to select the best and optimal services will be a great challenge. In this paper we present a Cloud service selection framework in the Cloud market that uses a recommender system (RS) which helps a user to select the best services from different Cloud providers (CP) that matches user requirements. The RS recommends a service based on the network QoS and Virtual Machine (VM) platform factors of difference CPs. The experimental results show that our Cloud service recommender system (CSRS) can effectively recommend a good combination of Cloud services to consumers.",
"title": ""
},
{
"docid": "bace1e6a258f659fb1db2a1a50c6aaac",
"text": "Join is one of the most important operators in database query processing. Its research progressively focuses on hardware-conscious implementations since single-threaded performance improvements of general-purpose processors will slow down in the next years. SIMD extensions, multithreading as well as multi-core processors may further lead to performance advantages. Besides that, multiprocessor system-on-chips (MPSoCs) are a suitable platform to keep up with high-performance processors while providing an up to three orders of magnitude lower power consumption. In this paper, we study the implementation of hash join algorithms on MPSoCs and exemplarily employ the Tomahawk4 chip. Tomahawk4 integrates four processing modules each equipped with tightly-coupled SRAM as well as an instruction set extension tailored to hashing algorithms. An external DRAM serves as shared main memory and can be accessed by DMA transfers. We aim to best exploit the architecture and to adapt the algorithms to the MPSoC. Hence, we compare two hash table designs according to their memory accesses and investigate the performance impact of the additional hashing instructions. Furthermore, the MPSoC platform allows power measurements with different clock frequencies and supply voltages to find the configuration with highest energy savings. Our experiments on the MPSoC show that four database-specific cores outperform a standard RISC CPU by up to factor 5 while consuming less than 200 mW.",
"title": ""
},
{
"docid": "3cef1c7c440f0d6dc2be8ca367dc04e8",
"text": "We introduce FontCode, an information embedding technique for text documents. Provided a text document with specific fonts, our method embeds user-specified information in the text by perturbing the glyphs of text characters while preserving the text content. We devise an algorithm to choose unobtrusive yet machine-recognizable glyph perturbations, leveraging a recently developed generative model that alters the glyphs of each character continuously on a font manifold. We then introduce an algorithm that embeds a user-provided message in the text document and produces an encoded document whose appearance is minimally perturbed from the original document. We also present a glyph recognition method that recovers the embedded information from an encoded document stored as a vector graphic or pixel image, or even on a printed paper. In addition, we introduce a new error-correction coding scheme that rectifies a certain number of recognition errors. Lastly, we demonstrate that our technique enables a wide array of applications, using it as a text document metadata holder, an unobtrusive optical barcode, a cryptographic message embedding scheme, and a text document signature.",
"title": ""
},
{
"docid": "03d16e92557d16cfba5249bf69863f3d",
"text": "PURPOSE\nThe aim of this study was to investigate effects of a multimodal treatment of phonology, phonomotor treatment, on the reading abilities of persons with aphasia (PWA) with phonological alexia.\n\n\nMETHOD\nIn a retrospective, single-group design, this study presents pre-, post-, and 3-months posttreatment data for 8 PWA with phonological alexia. Participants completed 60 hr of phonomotor treatment over 6 weeks. Wilcoxon signed-ranks tests and group effect sizes comparing pre-, immediately post-, and 3-months posttreatment performance on tests of phonological processing and reading were performed.\n\n\nRESULTS\nGroup data showed phonological processing and oral reading of real words and nonwords improved significantly posttreatment; these gains were maintained 3 months later. No group improvement was found for reading comprehension; however, one individual did show improvement immediately post- and 3-months posttreatment.\n\n\nCONCLUSIONS\nThis study provides support that phonomotor treatment is a viable approach to improve phonological processing and oral reading for PWA with phonological alexia. The lack of improvement with comprehension is inconsistent with prior work using similar treatments (Conway et al., 1998; Kendall et al., 2003). However, this difference can, in part, be accounted for by differences in variables, such as treatment intensity and frequency, outcome measures, and alexia severity.",
"title": ""
},
{
"docid": "660998f8595df10e67bdb550c7ac5a5c",
"text": "The role of information technology (IT) in education has significantly increased, but resistance to technology by public school teachers worldwide remains high. This study examined public school teachers’ technology acceptance decision-making by using a research model that is based on key findings from relevant prior research and important characteristics of the targeted user acceptance phenomenon. The model was longitudinally tested using responses from more than 130 teachers attending an intensive 4-week training program on Microsoft PowerPoint, a common but important classroom presentation technology. In addition to identifying key acceptance determinants, we examined plausible changes in acceptance drivers over the course of the training, including their influence patterns and magnitudes. Overall, our model showed a reasonably good fit with the data and exhibited satisfactory explanatory power, based on the responses collected from training commencement and completion. Our findings suggest a highly prominent and significant core influence path from job relevance to perceived usefulness and then technology acceptance. Analysis of data collected at the beginning and the end of the training supports most of our hypotheses and sheds light on plausible changes in their influences over time. Specifically, teachers appear to consider a rich set of factors in initial acceptance but concentrate on fundamental determinants (e.g. perceived usefulness and perceived ease of use) in their continued acceptance. # 2003 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "7f0005d98f2ca7b0c58764e801e62601",
"text": "Modern optimizing compilers use several passes over a program's intermediate representation to generate good code. Many of these optimizations exhibit a phase-ordering problem. Getting the best code may require iterating optimizations until a fixed point is reached. Combining these phases can lead to the discovery of more facts about the program, exposing more opportunities for optimization. This article presents a framework for describing optimizations. It shows how to combine two such frameworks and how to reason about the properties of the resulting framework. The structure of the frame work provides insight into when a combination yields better results. To make the ideas more concrete, this article presents a framework for combining constant propagation, value numbering, and unreachable-code elimination. It is an open question as to what other frameworks can be combined in this way.",
"title": ""
},
{
"docid": "a0b9b40328c03cbbe801e027fb793117",
"text": "BACKGROUND\nA better knowledge of the job aspects that may predict home health care nurses' burnout and work engagement is important in view of stress prevention and health promotion. The Job Demands-Resources model predicts that job demands and resources relate to burnout and work engagement but has not previously been tested in the specific context of home health care nursing.\n\n\nPURPOSE\nThe present study offers a comprehensive test of the Job-Demands Resources model in home health care nursing. We investigate the main and interaction effects of distinctive job demands (workload, emotional demands and aggression) and resources (autonomy, social support and learning opportunities) on burnout and work engagement.\n\n\nMETHODS\nAnalyses were conducted using cross-sectional data from 675 Belgian home health care nurses, who participated in a voluntary and anonymous survey.\n\n\nRESULTS\nThe results show that workload and emotional demands were positively associated with burnout, whereas aggression was unrelated to burnout. All job resources were associated with higher levels of work engagement and lower levels of burnout. In addition, social support buffered the positive relationship between workload and burnout.\n\n\nCONCLUSIONS\nHome health care organizations should invest in dealing with workload and emotional demands and stimulating the job resources under study to reduce the risk of burnout and increase their nurses' work engagement.",
"title": ""
},
{
"docid": "b89259a915856b309a02e6e7aa6c957f",
"text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.",
"title": ""
},
{
"docid": "25779dfc55dc29428b3939bb37c47d50",
"text": "Human daily activity recognition using mobile personal sensing technology plays a central role in the field of pervasive healthcare. One major challenge lies in the inherent complexity of human body movements and the variety of styles when people perform a certain activity. To tackle this problem, in this paper, we present a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors. Our approach represents human activity signals as a sparse linear combination of activity signals from all activity classes in the training set. The class membership of the activity signal is determined by solving a l1 minimization problem. We experimentally validate the effectiveness of our sparse representation-based approach by recognizing nine most common human daily activities performed by 14 subjects. Our approach achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%. Furthermore, we demonstrate that by using random projection, the task of looking for “optimal features” to achieve the best activity recognition performance is less important within our framework.",
"title": ""
},
{
"docid": "0784d5907a8e5f1775ad98a25b1b0b31",
"text": "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.",
"title": ""
},
{
"docid": "b8f65d1679388c3d23a7d9dcadd19a9c",
"text": "Topic sentiment joint model is an extended model which aims to deal with the problem of detecting sentiments and topics simultaneously from online reviews. Most of existing topic sentiment joint modeling algorithms infer resulting distributions from the co-occurrence of words. But when the training corpus is short and small, the resulting distributions might be not very satisfying. In this paper, we propose a novel topic sentiment joint model with word embeddings (TSWE), which introduces word embeddings trained on external large corpus. Furthermore, we implement TSWE with Gibbs sampling algorithms. The experiment results on Chinese and English data sets show that TSWE achieves significant performance in the task of detecting sentiments and topics simultaneously.",
"title": ""
},
{
"docid": "8eb62d4fdc1be402cd9216352cb7cfc3",
"text": "In an attempt to better understand generalization in deep learning, we study several possible explanations. We show that implicit regularization induced by the optimization method is playing a key role in generalization and success of deep learning models. Motivated by this view, we study how different complexity measures can ensure generalization and explain how optimization algorithms can implicitly regularize complexity measures. We empirically investigate the ability of these measures to explain different observed phenomena in deep learning. We further study the invariances in neural networks, suggest complexity measures and optimization algorithms that have similar invariances to those in neural networks and evaluate them on a number of learning tasks. Thesis Advisor: Nathan Srebro Title: Professor",
"title": ""
},
{
"docid": "ec37e61fcac2639fa6e605b362f2a08d",
"text": "Keyphrases that efficiently summarize a document’s content are used in various document processing and retrieval tasks. Current state-of-the-art techniques for keyphrase extraction operate at a phrase-level and involve scoring candidate phrases based on features of their component words. In this paper, we learn keyphrase taggers for research papers using token-based features incorporating linguistic, surfaceform, and document-structure information through sequence labeling. We experimentally illustrate that using withindocument features alone, our tagger trained with Conditional Random Fields performs on-par with existing state-of-the-art systems that rely on information from Wikipedia and citation networks. In addition, we are also able to harness recent work on feature labeling to seamlessly incorporate expert knowledge and predictions from existing systems to enhance the extraction performance further. We highlight the modeling advantages of our keyphrase taggers and show significant performance improvements on two recently-compiled datasets of keyphrases from Computer Science research papers.",
"title": ""
}
] |
scidocsrr
|
3b24192d415527372dca571d0fe4c230
|
Shadow Suppression using RGB and HSV Color Space in Moving Object Detection
|
[
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
}
] |
[
{
"docid": "ced13f6c3e904f5bd833e2f2621ae5e2",
"text": "A growing amount of research focuses on learning in group settings and more specifically on learning in computersupported collaborative learning (CSCL) settings. Studies on western students indicate that online collaboration enhances student learning achievement; however, few empirical studies have examined student satisfaction, performance, and knowledge construction through online collaboration from a cross-cultural perspective. This study examines satisfaction, performance, and knowledge construction via online group discussions of students in two different cultural contexts. Students were both first-year university students majoring in educational sciences at a Flemish university and a Chinese university. Differences and similarities of the two groups of students with regard to satisfaction, learning process, and achievement were analyzed.",
"title": ""
},
{
"docid": "6bbed2c899db4439ba1f31004e15a040",
"text": "Compiler-component generators, such as lexical analyzer generators and parser generators, have long been used to facilitate the construction of compilers. A tree-manipulation language called twig has been developed to help construct efficient code generators. Twig transforms a tree-translation scheme into a code generator that combines a fast top-down tree-pattern matching algorithm with dynamic programming. Twig has been used to specify and construct code generators for several experimental compilers targeted for different machines.",
"title": ""
},
{
"docid": "ac8baab85f1c66b3aa74426e3b8fce14",
"text": "OBJECTIVE\nTo evaluate a web-based contingency management program (CM) and a phone-delivered cessation counseling program (Smoking Cessation for Healthy Births [SCHB]) with pregnant smokers in rural Appalachia who were ≤12 weeks gestation at enrollment.\n\n\nDESIGN\nTwo group randomized design.\n\n\nSETTING\nHome-based cessation programs in rural Appalachia Ohio and Kentucky.\n\n\nPARTICIPANTS\nA community sample of pregnant smokers (N = 17).\n\n\nMETHODS\nParticipants completed demographic and smoking-related questionnaires and were assigned to CM (n = 7) or SCHB (n = 10) conditions. Smoking status was assessed monthly using breath carbon monoxide and urinary cotinine.\n\n\nRESULTS\nFor CM, two of seven (28.57%) of the participants achieved abstinence, and three of 10 (30%) of those enrolled in SCHB were abstinent by late in pregnancy. Participants in CM attained abstinence more rapidly than those in SCHB. However, those in SCHB experienced less relapse to smoking, and a greater percentage of these participants reduced their smoking by at least 50%.\n\n\nCONCLUSION\nBased on this initial evaluation, the web-based CM and SCHB programs appeared to be feasible for use with rural pregnant smokers with acceptable program adherence for both approaches. Future researchers could explore combining these programs to capitalize on the strengths of each, for example, rapid smoking cessation based on CM incentives and better sustained cessation or reductions in smoking facilitated by the counseling support of SCHB.",
"title": ""
},
{
"docid": "a3e6d006a56913285d1eb6f0a8e1ce55",
"text": "This paper updates and builds on ‘Modelling with Stakeholders’ Voinov and Bousquet, 2010 which demonstrated the importance of, and demand for, stakeholder participation in resource and environmental modelling. This position paper returns to the concepts of that publication and reviews the progress made since 2010. A new development is the wide introduction and acceptance of social media and web applications, which dramatically changes the context and scale of stakeholder interactions and participation. Technology advances make it easier to incorporate information in interactive formats via visualization and games to augment participatory experiences. Citizens as stakeholders are increasingly demanding to be engaged in planning decisions that affect them and their communities, at scales from local to global. How people interact with and access models and data is rapidly evolving. In turn, this requires changes in how models are built, packaged, and disseminated: citizens are less in awe of experts and external authorities, and they are increasingly aware of their own capabilities to provide inputs to planning processes, including models. The continued acceleration of environmental degradation and natural resource depletion accompanies these societal changes, even as there is a growing acceptance of the need to transition to alternative, possibly very different, life styles. Substantive transitions cannot occur without significant changes in human behaviour and perceptions. The important and diverse roles that models can play in guiding human behaviour, and in disseminating and increasing societal knowledge, are a feature of stakeholder processes today. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d001d61e90dd38eb0eab0c8d4af9d2a6",
"text": "Wireless LANs, especially WiFi, have been pervasively deployed and have fostered myriad wireless communication services and ubiquitous computing applications. A primary concern in designing each scenario-tailored application is to combat harsh indoor propagation environments, particularly Non-Line-Of-Sight (NLOS) propagation. The ability to distinguish Line-Of-Sight (LOS) path from NLOS paths acts as a key enabler for adaptive communication, cognitive radios, robust localization, etc. Enabling such capability on commodity WiFi infrastructure, however, is prohibitive due to the coarse multipath resolution with mere MAC layer RSSI. In this work, we dive into the PHY layer and strive to eliminate irrelevant noise and NLOS paths with long delays from the multipath channel responses. To further break away from the intrinsic bandwidth limit of WiFi, we extend to the spatial domain and harness natural mobility to magnify the randomness of NLOS paths while retaining the deterministic nature of the LOS component. We prototype LiFi, a statistical LOS identification scheme for commodity WiFi infrastructure and evaluate it in typical indoor environments covering an area of 1500 m2. Experimental results demonstrate an overall LOS identification rate of 90.4% with a false alarm rate of 9.3%.",
"title": ""
},
{
"docid": "633c906446a11252c3ab9e0aad20189c",
"text": "The term \" gamification \" is generally used to denote the application of game mechanisms in non‐gaming environments with the aim of enhancing the processes enacted and the experience of those involved. In recent years, gamification has become a catchword throughout the fields of education and training, thanks to its perceived potential to make learning more motivating and engaging. This paper is an attempt to shed light on the emergence and consolidation of gamification in education/training. It reports the results of a literature review that collected and analysed around 120 papers on the topic published between 2011 and 2014. These originate from different countries and deal with gamification both in training contexts and in formal educational, from primary school to higher education. The collected papers were analysed and classified according to various criteria, including target population, type of research (theoretical vs experimental), kind of educational contents delivered, and the tools deployed. The results that emerge from this study point to the increasing popularity of gamification techniques applied in a wide range of educational settings. At the same time, it appears that over the last few years the concept of gamification has become more clearly defined in the minds of researchers and practitioners. Indeed, until fairly recently the term was used by many to denote the adoption of game artefacts (especially digital ones) as educational tools for learning a specific subject such as algebra. In other words, it was used as a synonym of Game Based Learning (GBL) rather than to identify an educational strategy informing the overall learning process, which is treated globally as a game or competition. However, this terminological confusion appears only in a few isolated cases in this literature review, suggesting that a certain level of taxonomic and epistemological convergence is underway.",
"title": ""
},
{
"docid": "66df2a7148d67ffd3aac5fc91e09ee5d",
"text": "Tree boosting, which combines weak learners (typically decision trees) to generate a strong learner, is a highly effective and widely used machine learning method. However, the development of a high performance tree boosting model is a time-consuming process that requires numerous trial-and-error experiments. To tackle this issue, we have developed a visual diagnosis tool, BOOSTVis, to help experts quickly analyze and diagnose the training process of tree boosting. In particular, we have designed a temporal confusion matrix visualization, and combined it with a t-SNE projection and a tree visualization. These visualization components work together to provide a comprehensive overview of a tree boosting model, and enable an effective diagnosis of an unsatisfactory training process. Two case studies that were conducted on the Otto Group Product Classification Challenge dataset demonstrate that BOOSTVis can provide informative feedback and guidance to improve understanding and diagnosis of tree boosting algorithms.",
"title": ""
},
{
"docid": "61f0e20762a8ce5c3c40ea200a32dd43",
"text": "Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing the system effectiveness is based on analyzing the log files to track the studying time, the number of connections, and earned game bonus points. This study is based on an example of the online application for practical foreign language speaking skills training between random users, which select the role of a teacher or a student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed for both participants, along with user motivation by means of gamification. The actual percentage of successful connects between specifically unmotivated and unfamiliar with each other users was measured. The obtained result can be used for gauging the developed system success and the proposed teaching methodology in general. Keywords—elearning; gamification; marketing; monetization; viral marketing; virality",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "5118d816cb2ede5fa19875cbd50cc7d8",
"text": "PURPOSE\nTo review the concepts of reliability and validity, provide examples of how the concepts have been used in nursing research, provide guidance for improving the psychometric soundness of instruments, and report suggestions from editors of nursing journals for incorporating psychometric data into manuscripts.\n\n\nMETHODS\nCINAHL, MEDLINE, and PsycINFO databases were searched using key words: validity, reliability, and psychometrics. Nursing research articles were eligible for inclusion if they were published in the last 5 years, quantitative methods were used, and statistical evidence of psychometric properties were reported. Reports of strong psychometric properties of instruments were identified as well as those with little supporting evidence of psychometric soundness.\n\n\nFINDINGS\nReports frequently indicated content validity but sometimes the studies had fewer than five experts for review. Criterion validity was rarely reported and errors in the measurement of the criterion were identified. Construct validity remains underreported. Most reports indicated internal consistency reliability (alpha) but few reports included reliability testing for stability. When retest reliability was asserted, time intervals and correlations were frequently not included.\n\n\nCONCLUSIONS\nPlanning for psychometric testing through design and reducing nonrandom error in measurement will add to the reliability and validity of instruments and increase the strength of study findings. Underreporting of validity might occur because of small sample size, poor design, or lack of resources. Lack of information on psychometric properties and misapplication of psychometric testing is common in the literature.",
"title": ""
},
{
"docid": "5fb05ef7a15c82c56a222a49a1cc7cf6",
"text": "We describe Analyza, a system that helps lay users explore data. Analyza has been used within two large real world systems. The first is a question-and-answer feature in a spreadsheet product. The second provides convenient access to a revenue/inventory database for a large sales force. Both user bases consist of users who do not necessarily have coding skills, demonstrating Analyza's ability to democratize access to data. We discuss the key design decisions in implementing this system. For instance, how to mix structured and natural language modalities, how to use conversation to disambiguate and simplify querying, how to rely on the ``semantics' of the data to compensate for the lack of syntactic structure, and how to efficiently curate the data.",
"title": ""
},
{
"docid": "7cecfd37e44b26a67bee8e9c7dd74246",
"text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.",
"title": ""
},
{
"docid": "1a819d090746e83676b0fc3ee94fd526",
"text": "Brain-computer interfaces (BCIs) use signals recorded from the brain to operate robotic or prosthetic devices. Both invasive and noninvasive approaches have proven effective. Achieving the speed, accuracy, and reliability necessary for real-world applications remains the major challenge for BCI-based robotic control.",
"title": ""
},
{
"docid": "7e941f9534357fca740b97a99e86f384",
"text": "The head-direction (HD) cells found in the limbic system in freely mov ing rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be controlled accurately by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.",
"title": ""
},
{
"docid": "e0ec89c103aedb1d04fbc5892df288a8",
"text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.",
"title": ""
},
{
"docid": "80a34e1544f9a20d6e1698278e0479b5",
"text": "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.",
"title": ""
},
{
"docid": "36b7b37429a8df82e611df06303a8fcb",
"text": "Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.",
"title": ""
},
{
"docid": "48a400878a5f1fbc3b7b109aa7e9bd2b",
"text": "Mutation analysis is usually used to provide indication of the fault detection ability of a test set. It is mainly used for unit testing evaluation. This paper describes mutation analysis principles and their adaptation to the Lustre programming language. Alien-V, a mutation tool for Lustre is presented. Lesar modelchecker is used for eliminating equivalent mutant. A first experimentation to evaluate Lutess testing tool is summarized.",
"title": ""
},
{
"docid": "a757624e5fd2d4a364f484d55a430702",
"text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.",
"title": ""
},
{
"docid": "a0c9d3c2b14395a6d476b12c5e8b28b0",
"text": "Undergraduate research experiences enhance learning and professional development, but providing effective and scalable research training is often limited by practical implementation and orchestration challenges. We demonstrate Agile Research Studios (ARS)---a socio-technical system that expands research training opportunities by supporting research communities of practice without increasing faculty mentoring resources.",
"title": ""
}
] |
scidocsrr
|
10ba715cd3db4f9f338f391d6a0401d7
|
Challenges in 802.11 encryption algorithms: The need for an adaptive scheme for improved performance
|
[
{
"docid": "2ffd4537f9adff88434c8a2b5860b6a5",
"text": "free download the design of rijndael: aes the advanced the design of rijndael aes the advanced encryption publication moved: fips 197, advanced encryption standard rijndael aes paper nist computer security resource the design of rijndael toc beck-shop design and implementation of advanced encryption standard lecture note 4 the advanced encryption standard (aes) selecting the advanced encryption standard implementation of advanced encryption standard (aes implementation of advanced encryption standard algorithm cryptographic algorithms aes cryptography the advanced encryption the successor of des computational and algebraic aspects of the advanced advanced encryption standard security forum 2017 advanced encryption standard 123seminarsonly design of high speed 128 bit aes algorithm for data encryption fpga based implementation of aes encryption and decryption effective comparison and evaluation of des and rijndael advanced encryption standard (aes) and it’s working the long road to the advanced encryption standard fpga implementations of advanced encryption standard a survey a reconfigurable cryptography coprocessor rcc for advanced vlsi design and implementation of pipelined advanced information security and cryptography springer cryptographic algorithms (aes, rsa) polynomials in the nation’s service: using algebra to chapter 19: rijndael: a successor to the data encryption a vlsi architecture for rijndael, the advanced encryption a study of encryption algorithms (rsa, des, 3des and aes design an aes algorithm using s.r & m.c technique alook at the advanced encr yption standard (aes) aes-512: 512-bit advanced encryption standard algorithm some algebraic aspects of the advanced encryption standard global information assurance certification paper design of parallel advanced encryption standard (ae s shared architecture for encryption/decryption of aes iceec2015sp06.pdf an enhanced advanced encryption standard a vhdl implementation of the advanced encryption standard advanced encryption standard ijcset vlsi implementation of enhanced aes cryptography",
"title": ""
}
] |
[
{
"docid": "a33ed384b8f4a86e8cc82970c7074bad",
"text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.",
"title": ""
},
{
"docid": "e3db113a2b09ee8c7c093e696c85e6bf",
"text": "Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.",
"title": ""
},
{
"docid": "68c31aa73ba8bcc1b3421981877d4310",
"text": "Several approaches are available to create cross-platform applications. The majority of these approaches focus on purely mobile platforms. Their principle is to develop the application once and be able to deploy it to multiple mobile platforms with different operating systems (Android (Java), IOS (Objective C), Windows Phone 7 (C#), etc.). In this article, we propose a merged approach and cross-platform called ZCA \"ZeroCouplage Approach\". Merged to regroup the strong points of approaches: \"Runtime\", \"Component-Based\" and \"Cloud-Based\" thank to a design pattern which we created and named M2VC (Model-Virtual-View-Controller). Cross-platform allows creating a unique application that is deployable directly on many platforms: Web, Mobile and Desktop. In this article, we also compare our ZCA approach with others to approve its added value. Our idea, contrary to mobile approaches, consists of a given technology to implement cross-platform applications. To validate our approach, we have developed an open source framework named ZCF \"ZeroCouplage Framework\" for Java technology.",
"title": ""
},
{
"docid": "24e380a79c5520a4f656ff2177d43dd7",
"text": "a r t i c l e i n f o Social media have increasingly become popular platforms for information dissemination. Recently, companies have attempted to take advantage of social advertising to deliver their advertisements to appropriate customers. The success of message propagation in social media depends greatly on the content relevance and the closeness of social relationships. In this paper, considering the factors of user preference, network influence , and propagation capability, we propose a diffusion mechanism to deliver advertising information over microblogging media. Our experimental results show that the proposed model could provide advertisers with suitable targets for diffusing advertisements continuously and thus efficiently enhance advertising effectiveness. In recent years, social media, such as Facebook, Twitter and Plurk, have flourished and raised much attention. Social media provide users with an excellent platform to share and receive information and give marketers a great opportunity to diffuse information through numerous populations. An overwhelming majority of mar-keters are using social media to market their businesses, and a significant 81% of these marketers indicate that their efforts in social media have generated effective exposure for their businesses [59]. With effective vehicles for understanding customer behavior and new hybrid elements of the promotion mix, social media allow enterprises to make timely contact with the end-consumer at relatively low cost and higher levels of efficiency [52]. Since the World Wide Web (Web) is now the primary message delivering medium between advertisers and consumers, it is a critical issue to find the best way to utilize on-line media for advertising purposes [18,29]. The effectiveness of advertisement distribution highly relies on well understanding the preference information of the targeted users. However, some implicit personal information of users, particularly the new users, may not be always obtainable to the marketers [23]. As users know more about their friends than marketers, the relations between the users become a natural medium and filter for message diffusion. Moreover, most people are willing to share their information with friends and are likely to be affected by the opinions of their friends [35,45]. Social advertising is a kind of recommendation system, of sharing information between friends. It takes advantage of the relation of users to conduct an advertising campaign. In 2010, eMarketer reported that 90% of consumers rely on recommendations from people they trust. In the same time, IDG Amplify indicated that the efficiency of social advertising is greater than the traditional …",
"title": ""
},
{
"docid": "661e9f25abc38bd60f408cefeeb881e1",
"text": "The sirtuins are a highly conserved family of NAD+-dependent enzymes that regulate lifespan in lower organisms. Recently, the mammalian sirtuins have been connected to an ever widening circle of activities that encompass cellular stress resistance, genomic stability, tumorigenesis and energy metabolism. Here we review the recent progress in sirtuin biology, the role these proteins have in various age-related diseases and the tantalizing notion that the activity of this family of enzymes somehow regulates how long we live.",
"title": ""
},
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
{
"docid": "435200b067ebd77f69a04cc490d73fa6",
"text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.",
"title": ""
},
{
"docid": "aee62b585bb8a51b7bd9e0835bce72b4",
"text": "Someone said, “It is a bad craftsman that blames his tools.” It should be obvious to the thoughtful observer that the problem may be the implementation of ISD, not a systematic approach itself. At the highest level of a systems approach one cannot imagine a design process that does not identify the training needs of an organization or the learning needs of the students. While learning occurs in many different environments, it is generally agreed that instruction requires that one first identify the goals of the instruction. It is equally difficult to imagine a process that does not involve planning, development, implementation, and evaluation. It is not these essential development activities that are in question but perhaps the fact that their detailed implementation in various incarnations of ISD do not represent the most efficient or effective method for designing instruction. A more significant element is the emphasis on the process involved in developing instruction rather than the basic learning principles that this process should emphasize. Merely following a series of steps, when there is insufficient guidance as to quality, is likely to result in an inferior product. A technology involves not only the steps involved but a set of specifications for what each step is to accomplish. Perhaps many ISD implementations have had insufficient specifications for the products of the process.",
"title": ""
},
{
"docid": "4284e9bbe3bf4c50f9e37455f1118e6b",
"text": "A longevity revolution (Butler, 2008) is occurring across the globe. Because of factors ranging from the reduction of early-age mortality to an increase in life expectancy at later ages, most of the world’s population is now living longer than preceding generations (Bengtson, 2014). There are currently more than 44 million older adults—typically defined as persons 65 years and older—living in the United States, and this number is expected to increase to 98 million by 2060 (Administration on Aging, 2016). Although most older adults report higher levels of life satisfaction than do younger or middle-aged adults (George, 2010), between 5.6 and 8 million older Americans have a diagnosable mental health or substance use disorder (Bartels & Naslund, 2013). Furthermore, because of the rapid growth of the older adult population, this figure is expected to nearly double by 2030 (Bartels & Naslund, 2013). Mental health care is effective for older adults, and evidence-based treatments exist to address a broad range of issues, including anxiety disorders, depression, sleep disturbances, substance abuse, and some symptoms of dementia (Myers & Harper, 2004). Counseling interventions may also be beneficial for nonclinical life transitions, such as coping with loss, adjusting to retirement and a reduced income, and becoming a grandparent (Myers & Harper, 2004). Yet, older adults are underserved when it comes to mental",
"title": ""
},
{
"docid": "c86b01c42f54053acf69c7ea3495c330",
"text": "Opioids are central analgesics that act on the CNS (central nervous system) and PNS (peripheral nervous system). We investigated the effects of codeine (COD) and tramadol (TRAM) on local anesthesia of the sciatic nerve. Eighty Wistar male rats received the following SC injections in the popliteal fossa: local anesthetic with epinephrine (LA); local anesthetic without vasoconstrictor (LA WV); COD; TRAM; LA + COD; LA + TRAM; COD 20 minutes prior to LA (COD 20' + LA) or TRAM 20 minutes prior to LA (TRAM 20' + LA). As a nociceptive function, the blockade was considered the absence of a paw withdraw reflex. As a motor function, it was the absence of claudication. As a proprioceptive function, it was the absence of hopping and tactile responses. All data were compared using repeated-measures analysis of variance (ANOVA). Opioids showed a significant increase in the level of anesthesia, and the blockade duration of LA + COD was greater than that of the remaining groups (p < 0.05). The associated use of opioids improved anesthesia efficacy. This could lead to a new perspective in controlling dental pain.",
"title": ""
},
{
"docid": "b90563b5f6d2b606d335222eb06d0b9a",
"text": "Ensuring differential privacy of models learned from sensitive user data is an important goal that has been studied extensively in recent years. It is now known that for some basic learning problems, especially those involving high-dimensional data, producing an accurate private model requires much more data than learning without privacy. At the same time, in many applications it is not necessary to expose the model itself. Instead users may be allowed to query the prediction model on their inputs only through an appropriate interface. Here we formulate the problem of ensuring privacy of individual predictions and investigate the overheads required to achieve it in several standard models of classification and regression. We first describe a simple baseline approach based on training several models on disjoint subsets of data and using standard private aggregation techniques to predict. We show that this approach has nearly optimal sample complexity for (realizable) PAC learning of any class of Boolean functions. At the same time, without strong assumptions on the data distribution, the aggregation step introduces a substantial overhead. We demonstrate that this overhead can be avoided for the well-studied class of thresholds on a line and for a number of standard settings of convex regression. The analysis of our algorithm for learning thresholds relies crucially on strong generalization guarantees that we establish for all differentially private prediction algorithms.",
"title": ""
},
{
"docid": "f2aff84f10b59cbc127dab6266cee11c",
"text": "This paper extends the Argument Interchange Format to enable it to represent dialogic argumentation. One of the challenges is to tie together the rules expressed in dialogue protocols with the inferential relations between premises and conclusions. The extensions are founded upon two important analogies which minimise the extra ontological machinery required. First, locutions in a dialogue are analogous to AIF Inodes which capture propositional data. Second, steps between locutions are analogous to AIF S-nodes which capture inferential movement. This paper shows how these two analogies combine to allow both dialogue protocols and dialogue histories to be represented alongside monologic arguments in a single coherent system.",
"title": ""
},
{
"docid": "37f5fcde86e30359e678ff3f957e3c7e",
"text": "A Phase I dose-proportionality study is an essential tool to understand drug pharmacokinetic dose-response relationship in early clinical development. There are a number of different approaches to the assessment of dose proportionality. The confidence interval (CI) criteria approach, a staitistically sound and clinically relevant approach, has been proposed to detect dose-proportionality (Smith, et al. 2000), by which the proportionality is declared if the 90% CI for slope is completely contained within the pre-determined critical interval. This method, enhancing the information from a clinical dose-proportionality study, has gradually drawn attention. However, exact power calculation of dose proportinality studies based on CI criteria poses difficulity for practioners since the methodology was essentailly from two one-sided tests (TOST) procedure for the slope, which should be unit under proportionality. It requires sophisticated numerical integration, and it is not available in statistical software packages. This paper presents a SAS Macro to compute the empirical power for the CI-based dose proportinality studies. The resulting sample sizes and corresponding empirical powers suggest that this approach is powerful in detecting dose-proportionality under commonly used sample sizes for phase I studies.",
"title": ""
},
{
"docid": "1d5e363647bd8018b14abfcc426246bb",
"text": "This paper presents a new approach to improve the performance of finger-vein identification systems presented in the literature. The proposed system simultaneously acquires the finger-vein and low-resolution fingerprint images and combines these two evidences using a novel score-level combination strategy. We examine the previously proposed finger-vein identification approaches and develop a new approach that illustrates it superiority over prior published efforts. The utility of low-resolution fingerprint images acquired from a webcam is examined to ascertain the matching performance from such images. We develop and investigate two new score-level combinations, i.e., holistic and nonlinear fusion, and comparatively evaluate them with more popular score-level fusion approaches to ascertain their effectiveness in the proposed system. The rigorous experimental results presented on the database of 6264 images from 156 subjects illustrate significant improvement in the performance, i.e., both from the authentication and recognition experiments.",
"title": ""
},
{
"docid": "0fd635cfbcbd2d648f5c25ce2cb551a5",
"text": "The main focus of relational learning for knowledge graph completion (KGC) lies in exploiting rich contextual information for facts. Many state-of-the-art models incorporate fact sequences, entity types, and even textual information. Unfortunately, most of them do not fully take advantage of rich structural information in a KG, i.e., connectivity patterns around each entity. In this paper, we propose a context-aware convolutional learning (CACL) model which jointly learns from entities and their multi-hop neighborhoods. Since we directly utilize the connectivity patterns contained in each multi-hop neighborhood, the structural role similarity among entities can be better captured, resulting in more informative entity and relation embeddings. Specifically, CACL collects entities and relations from the multi-hop neighborhood as contextual information according to their relative importance and uniquely maps them to a linear vector space. Our convolutional architecture leverages a deep learning technique to represent each entity along with its linearly mapped contextual information. Thus, we can elaborately extract the features of key connectivity patterns from the context and incorporate them into a score function which evaluates the validity of facts. Experimental results on the newest datasets show that CACL outperforms existing approaches by successfully enriching embeddings with neighborhood information.",
"title": ""
},
{
"docid": "76afcc3dfbb06f2796b61c8b5b424ad8",
"text": "Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them, and otherwise offering competitive results.",
"title": ""
},
{
"docid": "242b854de904075d04e7044e680dc281",
"text": "Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh and the end of eight grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.",
"title": ""
},
{
"docid": "0f613e9c6d2a6ca47d5ed0e6b853735e",
"text": "We introduce a novel approach for automatically classifying the sentiment of Twitter messages. These messages are classified as either positive or negative with respect to a query term. This is useful for consumers who want to research the sentiment of products before purchase, or companies that want to monitor the public sentiment of their brands. There is no previous research on classifying sentiment of messages on microblogging services like Twitter. We present the results of machine learning algorithms for classifying the sentiment of Twitter messages using distant supervision. Our training data consists of Twitter messages with emoticons, which are used as noisy labels. This type of training data is abundantly available and can be obtained through automated means. We show that machine learning algorithms (Naive Bayes, Maximum Entropy, and SVM) have accuracy above 80% when trained with emoticon data. This paper also describes the preprocessing steps needed in order to achieve high accuracy. The main contribution of this paper is the idea of using tweets with emoticons for distant supervised learning.",
"title": ""
},
{
"docid": "2d2465aff21421330f82468858a74cf4",
"text": "There has been a tremendous increase in popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing the data with user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that majority of the fitness trackers use unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of user's activity, making it possible for an eavesdropper to determine user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can represent user's gait which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.",
"title": ""
},
{
"docid": "2cacc319693079eb420c51f602dc45ec",
"text": "We provide code that produces beautiful poetry. Our sonnet-generation algorithm includes several novel elements that improve over the state-of-the-art, leading to rhythmic and inspiring poems. The work discussed here is the winner of the 2018 PoetiX Literary Turing Test Award for computer-generated poetry.",
"title": ""
}
] |
scidocsrr
|
2f8fce164cb4453cf5498b7b0275792f
|
Accelerating Convolutional Neural Networks for Mobile Applications
|
[
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
},
{
"docid": "26dac00bc328dc9c8065ff105d1f8233",
"text": "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 ~ 6× speed-up and 15 ~ 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.",
"title": ""
}
] |
[
{
"docid": "69f597aac301a492892354dd593a4355",
"text": "The influence of user generated content on e-commerce websites and social media has been addressed in both practical and theoretical fields. Since most previous studies focus on either electronic word of mouth (eWOM) from e-commerce websites (EC-eWOM) or social media (SM-eWOM), little is known about the adoption process when consumers are presented EC-eWOM and SM-eWOM simultaneously. We focus on this problem by considering their adoption as an interactive process. It clarifies the mechanism of consumer’s adoption for those from the perspective of cognitive cost theory. A conceptual model is proposed about the relationship between the adoptions of the two types of eWOM. The empirical analysis shows that EC-eWOM’s usefulness and credibility positively influence the adoption of EC-eWOM, but negatively influence that of SM-eWOM. EC-eWOM adoption negatively impacts SM-eWOM adoption, and mediates the relationship between usefulness, credibility and SM-eWOM adoption. The moderating effects of consumers’ cognitive level and degree of involvement are also discussed. This paper further explains the adoption of the two types of eWOM based on the cognitive cost theory and enriches the theoretical research about eWOM in the context of social commerce. Implications for practice, as well as suggestions for future research, are also discussed. 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d1a8e3a67181cd43429a98dc38affd35",
"text": "Deep belief nets (DBNs) with multiple artificial neural networks (ANNs) have attracted many researchers recently. In this paper, we propose to compose restricted Boltzmann machine (RBM) and multi-layer perceptron (MLP) as a DBN to predict chaotic time series data, such as the Lorenz chaos and the Henon map. Experiment results showed that in the sense of prediction precision, the novel DBN performed better than the conventional DBN with RBMs.",
"title": ""
},
{
"docid": "a36944b193ca1b2423010017b08d5d2c",
"text": "Hand washing is a critical activity in preventing the spread of infection in health-care environments and food preparation areas. Several guidelines recommended a hand washing protocol consisting of six steps that ensure that all areas of the hands are thoroughly cleaned. In this paper, we describe a novel approach that uses a computer vision system to measure the user’s hands motions to ensure that the hand washing guidelines are followed. A hand washing quality assessment system needs to know if the hands are joined or separated and it has to be robust to different lighting conditions, occlusions, reflections and changes in the color of the sink surface. This work presents three main contributions: a description of a system which delivers robust hands segmentation using a combination of color and motion analysis, a single multi-modal particle filter (PF) in combination with a k-means-based clustering technique to track both hands/arms, and the implementation of a multi-class classification of hand gestures using a support vector machine ensemble. PF performance is discussed and compared with a standard Kalman filter estimator. Finally, the global performance of the system is analyzed and compared with human performance, showing an accuracy close to that of human experts.",
"title": ""
},
{
"docid": "d10c17324f8f6d4523964f10bc689d8e",
"text": "This article studied a novel Log-Periodic Dipole Antenna (LPDA) with distributed inductive load for size reduction. By adding a short circuit stub at top of the each element, the dimensions of the LPDA are reduced by nearly 50% compared to the conventional one. The impedance bandwidth of the presented antenna is nearly 122% (54~223MHz) (S11<;10dB), and this antenna is very suited for BROADCAST and TV applications.",
"title": ""
},
{
"docid": "cf1720877ddc4400bdce2a149b5ec8b4",
"text": "How do we find patterns in author-keyword associations, evolving over time? Or in data cubes (tensors), with product-branchcustomer sales information? And more generally, how to summarize high-order data cubes (tensors)? How to incrementally update these patterns over time? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks, and many more settings. However, they have only two orders (i.e., matrices, like author and keyword in the previous example).\n We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce a general framework, incremental tensor analysis (ITA), which efficiently computes a compact summary for high-order and high-dimensional data, and also reveals the hidden correlations. Three variants of ITA are presented: (1) dynamic tensor analysis (DTA); (2) streaming tensor analysis (STA); and (3) window-based tensor analysis (WTA). In paricular, we explore several fundamental design trade-offs such as space efficiency, computational cost, approximation accuracy, time dependency, and model complexity.\n We implement all our methods and apply them in several real settings, such as network anomaly detection, multiway latent semantic indexing on citation networks, and correlation study on sensor measurements. Our empirical studies show that the proposed methods are fast and accurate and that they find interesting patterns and outliers on the real datasets.",
"title": ""
},
{
"docid": "173d791e05859ec4cc28b9649c414c62",
"text": "Breast cancer is the most common invasive cancer in females worldwide. It usually presents with a lump in the breast with or without other manifestations. Diagnosis of breast cancer depends on physical examination, mammographic findings and biopsy results. Treatment of breast cancer depends on the stage of the disease. Lines of treatment include mainly surgical removal of the tumor followed by radiotherapy or chemotherapy. Other lines including immunotherapy, thermochemotherapy and alternative medicine may represent a hope for breast cancer",
"title": ""
},
{
"docid": "8069999c95b31e8c847091f72b694af7",
"text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.",
"title": ""
},
{
"docid": "380fdee23bebf16b05ce7caebd6edac4",
"text": "Automatic detection of emotions has been evaluated using standard Mel-frequency Cepstral Coefficients, MFCCs, and a variant, MFCC-low, calculated between 20 and 300 Hz, in order to model pitch. Also plain pitch features have been used. These acoustic features have all been modeled by Gaussian mixture models, GMMs, on the frame level. The method has been tested on two different corpora and languages; Swedish voice controlled telephone services and English meetings. The results indicate that using GMMs on the frame level is a feasible technique for emotion classification. The two MFCC methods have similar performance, and MFCC-low outperforms the pitch features. Combining the three classifiers significantly improves performance.",
"title": ""
},
{
"docid": "9e0f3f1ec7b54c5475a0448da45e4463",
"text": "Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style scatter/gather clustering that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.",
"title": ""
},
{
"docid": "2eba831751ae88cfb69b7c4463df438a",
"text": "ÐSoftware engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews. Index TermsÐInspections, walkthroughs, technical reviews, defects, defect detection, groups, group process, group size, expertise, reading, training, behavioral research, theory, research program.",
"title": ""
},
{
"docid": "d71ac31768bf1adb80a8011360225443",
"text": "Person re-identification has recently attracted a lot of attention in the computer vision community. This is in part due to the challenging nature of matching people across cameras with different viewpoints and lighting conditions, as well as across human pose variations. The literature has since devised several approaches to tackle these challenges, but the vast majority of the work has been concerned with appearance-based methods. We propose an approach that goes beyond appearance by integrating a semantic aspect into the model. We jointly learn a discriminative projection to a joint appearance-attribute subspace, effectively leveraging the interaction between attributes and appearance for matching. Our experimental results support our model and demonstrate the performance gain yielded by coupling both tasks. Our results outperform several state-of-the-art methods on VIPeR, a standard re-identification dataset. Finally, we report similar results on a new large-scale dataset we collected and labeled for our task.",
"title": ""
},
{
"docid": "15cd1e8dba20cbcfd10a1f1b926a5f63",
"text": "Decision analysis can be defined as a set of systematic procedures for analysing complex decision problems. Differences between the desired and the actual state of real world geographical system is a spatial decision problem, which can be approached systematically by means of multi-criteria decision making. Many real-world spatially related problems give rise to geographical information system based multi-criteria decision making. Geographical information systems and multi-criteria decision making have developed largely independently, but a trend towards the exploration of their synergies is now emerging. This paper discusses the synergistic role of multi-criteria decisions in geographical information systems and the use of geographical information systems in multi-attribute decision analysis. An example is provided of analysis of land use suitability by use of either weighted linear combination methods or ordered weighting averages.",
"title": ""
},
{
"docid": "793435bef5fd93d7f58b52269fcbb839",
"text": "Learning automatically the structure of object categories remains an important open problem in computer vision. In this paper, we propose a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure. Our approach is based on factorizing image deformations, as induced by a viewpoint change or an object deformation, by learning a deep neural network that detects landmarks consistently with such visual effects. Furthermore, we show that the learned landmarks establish meaningful correspondences between different object instances in a category without having to impose this requirement explicitly. We assess the method qualitatively on a variety of object types, natural and man-made. We also show that our unsupervised landmarks are highly predictive of manually-annotated landmarks in face benchmark datasets, and can be used to regress these with a high degree of accuracy.",
"title": ""
},
{
"docid": "a2247241882074e5d27a3c3bbbde5936",
"text": "As scientific computation continues to scale, it is crucial to use floating-point arithmetic processors as efficiently as possible. Lower precision allows streaming architectures to perform more operations per second and can reduce memory bandwidth pressure on all architectures. However, using a precision that is too low for a given algorithm and data set will result in inaccurate results. Thus, developers must balance speed and accuracy when choosing the floating-point precision of their subroutines and data structures. I am investigating techniques to help developers learn about the runtime floating-point behavior of their programs, and to help them make decisions concerning the choice of precision in implementation. I propose to develop methods that will generate floating-point precision configurations, automatically testing and validating them using binary instrumentation. The goal is ultimately to make a recommendation to the developer regarding which parts of the program can be reduced to single-precision. The central thesis is that automated analysis techniques can make recommendations regarding the precision levels that each part of a computer program must use to maintain overall accuracy, with the goal of improving performance on scientific codes.",
"title": ""
},
{
"docid": "0059c0b90c2ab8729ca98569be74a3dc",
"text": "This paper describes the STAC resource, a corpus of multi-party chats annotated for discourse structure in the style of SDRT (Asher and Lascarides, 2003; Lascarides and Asher, 2009). The main goal of the STAC project is to study the discourse structure of multi-party dialogues in order to understand the linguistic strategies adopted by interlocutors to achieve their conversational goals, especially when these goals are opposed. The STAC corpus is not only a rich source of data on strategic conversation, but also the first corpus that we are aware of that provides full discourse structures for multi-party dialogues. It has other remarkable features that make it an interesting resource for other topics: interleaved threads, creative language, and interactions between linguistic and extra-linguistic contexts.",
"title": ""
},
{
"docid": "699e0a10b29fad7d259cd781457462c4",
"text": "Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual metaphor to depict software structure changes with techniques that emphasize both following unchanged code as well as detecting and highlighting important events such as code drift, splits, merges, insertions and deletions. The method is illustrated with the analysis of a real-world C++ code system.",
"title": ""
},
{
"docid": "4e8f7fdba06ae7973e3d25cf35399aaf",
"text": "Endometriosis is a benign and common disorder that is characterized by ectopic endometrium outside the uterus. Extrapelvic endometriosis, like of the vulva, is rarely seen. We report a case of a 47-year-old woman referred to our clinic due to complaints of a vulvar mass and periodic swelling of the mass at the time of menstruation. During surgery, the cyst ruptured and a chocolate-colored liquid escaped onto the surgical field. The cyst was extirpated totally. Hipstopathological examination showed findings compatible with endometriosis. She was asked to follow-up after three weeks. The patient had no complaints and the incision field was clear at the follow-up.",
"title": ""
},
{
"docid": "09a3836f9dd429b6820daf3d2c9b2944",
"text": "Students attendance in the classroom is very important task and if taken manually wastes a lot of time. There are many automatic methods available for this purpose i.e. biometric attendance. All these methods also waste time because students have to make a queue to touch their thumb on the scanning device. This work describes the efficient algorithm that automatically marks the attendance without human intervention. This attendance is recorded by using a camera attached in front of classroom that is continuously capturing images of students, detect the faces in images and compare the detected faces with the database and mark the attendance. The paper review the related work in the field of attendance system then describes the system architecture, software algorithm and results.",
"title": ""
},
{
"docid": "e0092f7964604f7adbe9f010bbac4871",
"text": "In the last decade, Web 2.0 services such as blogs, tweets, forums, chats, email etc. have been widely used as communication media, with very good results. Sharing knowledge is an important part of learning and enhancing skills. Furthermore, emotions may affect decisionmaking and individual behavior. Bitcoin, a decentralized electronic currency system, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work, we investigated if the spread of the Bitcoin’s price is related to the volumes of tweets or Web Search media results. We compared trends of price with Google Trends data, volume of tweets and particularly with those that express a positive sentiment. We found significant cross correlation values, especially between Bitcoin price and Google Trends data, arguing our initial idea based on studies about trends in stock and goods market.",
"title": ""
},
{
"docid": "c16ff028e77459867eed4c2b9c1f44c6",
"text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection are especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. 
Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algorithm.",
"title": ""
}
] |
scidocsrr
|
71d1218f9c419357a4dc04b12d2460bb
|
BitShred: feature hashing malware for scalable triage and semantic analysis
|
[
{
"docid": "a1c82e67868ef3426896cdb541371d79",
"text": "Executable packing is the most common technique used by computer virus writers to obfuscate malicious code and evade detection by anti-virus software. Universal unpackers have been proposed that can detect and extract encrypted code from packed executables, therefore potentially revealing hidden viruses that can then be detected by traditional signature-based anti-virus software. However, universal unpackers are computationally expensive and scanning large collections of executables looking for virus infections may take several hours or even days. In this paper we apply pattern recognition techniques for fast detection of packed executables. The objective is to efficiently and accurately distinguish between packed and non-packed executables, so that only executables detected as packed will be sent to an universal unpacker, thus saving a significant amount of processing time. We show that our system achieves very high detection accuracy of packed executables with a low average processing time.",
"title": ""
},
{
"docid": "2f2291baa6c8a74744a16f27df7231d2",
"text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies Škodlivé programy, jako viry a červy (malware), jsou zřídka psány narychlo, jen tak. Obvykle jsou výsledkem svých evolučních vztahů. Zjištěním těchto vztahů a tvorby v přesné fylogenezi se předpokládá užitečná pomoc v analýze nového malware a ve vytvoření zásad pojmenovacího schématu. Porovnávání permutací kódu uvnitř malware mů že nabídnout výhody pro fylogenní generování, protože evoluční kroky implementované autory malware nemohou uchovat posloupnosti ve sdíleném kódu. Popisujeme rodinu fylogenních generátorů, které provádějí clustering pomocí PQ stromově založených extrakčních vlastností. Byl vykonán experiment v němž výstup stromu z těchto generátorů byl vyhodnocen vzhledem k fylogenezím generovaným pomocí vážených n-gramů. Výsledky ukazují výhody přístupu založeného na permutacích ve fylogenním generování malware. Les codes malveillants, tels que les virus et les vers, sont rarement écrits de zéro; en conséquence, il existe des relations de nature évolutive entre ces différents codes. Etablir ces relations et construire une phylogénie précise permet d’espérer une meilleure capacité d’analyse de nouveaux codes malveillants et de disposer d’une méthode de fait de nommage de ces codes. La concordance de permutations de code avec des parties de codes malveillants sont susceptibles d’être très intéressante dans l’établissement d’une phylogénie, dans la mesure où les étapes évolutives réalisées par les auteurs de codes malveillants ne conservent généralement pas l’ordre des instructions présentes dans le code commun. Nous décrivons ici une famille de générateurs phylogénétiques réalisant des regroupements à l’aide de caractéristiques extraites d’arbres PQ. Une expérience a été réalisée, dans laquelle l’arbre produit par ces générateurs est évalué d’une part en le comparant avec les classificiations de références utilisées par les antivirus par scannage, et d’autre part en le comparant aux phylogénies produites à l’aide de polygrammes de taille n (n-grammes), pondérés. Les résultats démontrent l’intérêt de l’approche utilisant les permutations dans la génération phylogénétique des codes malveillants. Haitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tämän seurauksena niistä on löydettävissä evoluution kaltaista samankaltaisuutta. 
Samankaltaisuuksien löytämisellä sekä rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sekä toteuttaa nimeämiskäytäntöjä. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionääriset askeleet eivät välttämättä säilytä jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin käyttämällä PQ-puuhun perustuvia ominaisuuksia. Teimme myös kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sekä evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, että permutaatioon perustuvaa lähestymistapaa voidaan menestyksekkäästi käyttää evoluutiomallien muodostamineen. Maliziöse Programme, wie z.B. Viren und Würmer, werden nur in den seltensten Fällen komplett neu geschrieben; als Ergebnis können zwischen verschiedenen maliziösen Codes Abhängigkeiten gefunden werden. Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziöser Codes kann es sehr hilfreich erweisen, Abhängigkeiten zu bestehenden maliziösen Codes darzulegen und somit einen Stammbaum zu erstellen. In dem Artikel wird u.a. auf moderne Ansätze innerhalb der Staumbaumgenerierung anhand ausgewählter Win32 Viren eingegangen. I programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puo’aiutare sia nell’analisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo’ dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita’ del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento l’albero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. I risultati indicano i risultati positivi dell’approccio basato su permutazioni nella generazione delle filogenie del malware. ",
"title": ""
}
] |
[
{
"docid": "a82a4d82b2713e0fe0a562ac09d40fef",
"text": "The advent of new cryptographic methods in recent years also includes schemes related to functional encryption. Within these schemes Attribute-based Encryption (ABE) became the most popular, including ciphertext-policy and key-policy ABE. ABE and related schemes are widely discussed within the mathematical community. Unfortunately, there are only a few implementations circulating within the computer science and the applied cryptography community. Hence, it is very difficult to include these new cryptographic methods in real-world applications. This article gives an overview of existing implementations and elaborates on their value in specific cloud computing and IoT application scenarios. This also includes a summary of the additions the authors made to current implementations such as the introduction of dynamic attributes. Keywords—Attribute-based Encryption, Applied Cryptography, Internet of Things, Cloud Computing Security",
"title": ""
},
{
"docid": "4625d09122eb2e42a201503405f7abfa",
"text": "OBJECTIVE\nTo summarize 16 years of National Collegiate Athletic Association (NCAA) injury surveillance data for 15 sports and to identify potential modifiable risk factors to target for injury prevention initiatives.\n\n\nBACKGROUND\nIn 1982, the NCAA began collecting standardized injury and exposure data for collegiate sports through its Injury Surveillance System (ISS). This special issue reviews 182 000 injuries and slightly more than 1 million exposure records captured over a 16-year time period (1988-1989 through 2003-2004). Game and practice injuries that required medical attention and resulted in at least 1 day of time loss were included. An exposure was defined as 1 athlete participating in 1 practice or game and is expressed as an athlete-exposure (A-E).\n\n\nMAIN RESULTS\nCombining data for all sports, injury rates were statistically significantly higher in games (13.8 injuries per 1000 A-Es) than in practices (4.0 injuries per 1000 A-Es), and preseason practice injury rates (6.6 injuries per 1000 A-Es) were significantly higher than both in-season (2.3 injuries per 1000 A-Es) and postseason (1.4 injuries per 1000 A-Es) practice rates. No significant change in game or practice injury rates was noted over the 16 years. More than 50% of all injuries were to the lower extremity. Ankle ligament sprains were the most common injury over all sports, accounting for 15% of all reported injuries. Rates of concussions and anterior cruciate ligament injuries increased significantly (average annual increases of 7.0% and 1.3%, respectively) over the sample period. These trends may reflect improvements in identification of these injuries, especially for concussion, over time. Football had the highest injury rates for both practices (9.6 injuries per 1000 A-Es) and games (35.9 injuries per 1000 A-Es), whereas men's baseball had the lowest rate in practice (1.9 injuries per 1000 A-Es) and women's softball had the lowest rate in games (4.3 injuries per 1000 A-Es).\n\n\nRECOMMENDATIONS\nIn general, participation in college athletics is safe, but these data indicate modifiable factors that, if addressed through injury prevention initiatives, may contribute to lower injury rates in collegiate sports.",
"title": ""
},
{
"docid": "1d3eb22e6f244fbe05d0cc0f7ee37b84",
"text": "Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe visual navigation.",
"title": ""
},
{
"docid": "aa47f1becff6cb2d4b97f81db6ff598a",
"text": "Communication is a critical factor for the big multi-agent world to stay organized and productive. Typically, most previous multi-agent “learning-to-communicate” studies try to predefine the communication protocols or use technologies such as tabular reinforcement learning and evolutionary algorithm, which cannot generalize to the changing environment or large collection of agents directly. In this paper, we propose an Actor-Coordinator-Critic Net (ACCNet) framework for solving multi-agent “learning-to-communicate” problem. The ACCNet naturally combines the powerful actor-critic reinforcement learning technology with deep learning technology. It can learn the communication protocols even from scratch under partially observable environments. We demonstrate that the ACCNet can achieve better results than several baselines under both continuous and discrete action space environments. We also analyse the learned protocols and discuss some design considerations.",
"title": ""
},
{
"docid": "45d49bbbc2d763effed6c7dc03ee3ce4",
"text": "IMPORTANCE\nDespite research showing no link between the measles-mumps-rubella (MMR) vaccine and autism spectrum disorders (ASD), beliefs that the vaccine causes autism persist, leading to lower vaccination levels. Parents who already have a child with ASD may be especially wary of vaccinations.\n\n\nOBJECTIVE\nTo report ASD occurrence by MMR vaccine status in a large sample of US children who have older siblings with and without ASD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nA retrospective cohort study using an administrative claims database associated with a large commercial health plan. Participants included children continuously enrolled in the health plan from birth to at least 5 years of age during 2001-2012 who also had an older sibling continuously enrolled for at least 6 months between 1997 and 2012.\n\n\nEXPOSURES\nMMR vaccine receipt (0, 1, 2 doses) between birth and 5 years of age.\n\n\nMAIN OUTCOMES AND MEASURES\nASD status defined as 2 claims with a diagnosis code in any position for autistic disorder or other specified pervasive developmental disorder (PDD) including Asperger syndrome, or unspecified PDD (International Classification of Diseases, Ninth Revision, Clinical Modification 299.0x, 299.8x, 299.9x).\n\n\nRESULTS\nOf 95,727 children with older siblings, 994 (1.04%) were diagnosed with ASD and 1929 (2.01%) had an older sibling with ASD. Of those with older siblings with ASD, 134 (6.9%) had ASD, vs 860 (0.9%) children with unaffected siblings (P < .001). MMR vaccination rates (≥1 dose) were 84% (n = 78,564) at age 2 years and 92% (n = 86,063) at age 5 years for children with unaffected older siblings, vs 73% (n = 1409) at age 2 years and 86% (n = 1660) at age 5 years for children with affected siblings. MMR vaccine receipt was not associated with an increased risk of ASD at any age. For children with older siblings with ASD, at age 2, the adjusted relative risk (RR) of ASD for 1 dose of MMR vaccine vs no vaccine was 0.76 (95% CI, 0.49-1.18; P = .22), and at age 5, the RR of ASD for 2 doses compared with no vaccine was 0.56 (95% CI, 0.31-1.01; P = .052). For children whose older siblings did not have ASD, at age 2, the adjusted RR of ASD for 1 dose was 0.91 (95% CI, 0.67-1.20; P = .50) and at age 5, the RR of ASD for 2 doses was 1.12 (95% CI, 0.78-1.59; P = .55).\n\n\nCONCLUSIONS AND RELEVANCE\nIn this large sample of privately insured children with older siblings, receipt of the MMR vaccine was not associated with increased risk of ASD, regardless of whether older siblings had ASD. These findings indicate no harmful association between MMR vaccine receipt and ASD even among children already at higher risk for ASD.",
"title": ""
},
{
"docid": "a0787399eaca5b59a87ed0644da10fc6",
"text": "This work faces the problem of combining the outputs of two co-siting BTS, one operating with 2G networks and the other with 3G (or 4G) networks. This requirement is becoming more and more frequent because many operators, for increasing the capacity for data and voice signal transmission, have overlaid the new network in 3G or 4G technology to the existing 2G infrastructure. The solution here proposed is constituted by a low loss combiner realized through a directional double single-sided filtering system, which manages both TX and RX signals from each BTS output. The design approach for the combiner architecture is described with a particular emphasis on the synthesis of the double single-sided filters (realized by means of extracted pole technique). A prototype of the low-loss combiner has been designed and fabricated for validating the proposed approach. The results obtained are here discussed making into evidence the pros & cons of the proposed solution.",
"title": ""
},
{
"docid": "1145885cd444570248bdbbe163df6bfa",
"text": "Mobile edge computing (a.k.a. fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in renewable-powered mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and run- time performance when compared to standard reinforcement learning algorithms such as Q- learning.",
"title": ""
},
{
"docid": "9891cd761ca163395972d10624ddf6e4",
"text": "In this work, we introduce a Hierarchical Generative Model (HGM) to enable realistic forward eye image synthesis, as well as effective backward eye gaze estimation. The proposed HGM consists of a hierarchical generative shape model (HGSM), and a conditional bidirectional generative adversarial network (c-BiGAN). The HGSM encodes eye geometry knowledge and relates eye gaze with eye shape, while c-BiGAN leverages on big data and captures the dependency between eye shape and eye appearance. As an intermediate component, eye shape connects knowledge-based model (HGSM) with data-driven model (c-BiGAN) and enables bidirectional inference. Through a top-down inference, the HGM can synthesize eye images consistent with the given eye gaze. Through a bottom-up inference, HGM can infer eye gaze effectively from a given eye image. Qualitative and quantitative evaluations on benchmark datasets demonstrate our model's effectiveness on both eye image synthesis and eye gaze estimation. In addition, the proposed model is not restricted to eye images only. It can be adapted to face images and any shape-appearance related fields.",
"title": ""
},
{
"docid": "e99c12645fd14528a150f915b3849c2b",
"text": "Teaching in the cyberspace classroom requires moving beyond old models of. pedagogy into new practices that are more facilitative. It involves much more than simply taking old models of pedagogy and transferring them to a different medium. Unlike the face-to-face classroom, in online distance education, attention needs to be paid to the development of a sense of community within the group of participants in order for the learning process to be successful. The transition to the cyberspace classroom can be successfully achieved if attention is paid to several key areas. These include: ensuring access to and familiarity with the technology in use; establishing guidelines and procedures which are relatively loose and free-flowing, and generated with significant input from participants; striving to achieve maximum participation and \"buy-in\" from the participants; promoting collaborative learning; and creating a double or triple loop in the learning process to enable participants to reflect on their learning process. All of these practices significantly contribute to the development of an online learning community, a powerful tool for enhancing the learning experience. Each of these is reviewed in detail in the paper. (AEF) Reproductions supplied by EDRS are the best that can be made from the original document. Making the Transition: Helping Teachers to Teach Online Rena M. Palloff, Ph.D. Crossroads Consulting Group and The Fielding Institute Alameda, CA",
"title": ""
},
{
"docid": "d23577aea49aa71fc3abc8e21b8a2394",
"text": "A publish/subscribe (PS) model is an event-driven model of a distributed system. In this paper, we consider a peer-to-peer (P2P) type of PS model where each peer (process) can publish and subscribe events. Here, a peer publishes an event message and then the event message is notified to a target peer which is interested in the event. Publications and subscriptions are specified in terms of topics as discussed in topic-based PS systems. In this paper, we newly discuss a topic-based access control (TBAC) model to prevent illegal information flow among peers in PS systems. Here, an access right is a pair \"t, op\" of a topic t and an operation op which is publish or subscribe. A peer is allowed to publish an event message with topics and subscribe topics only if the topics are granted to the peer. An event message e is notified to a peer pi if the publication of e and subscription of pi include some common topic. If a peer pi publishes an event message e2 after receiving an event message e1, the event message e2 may bring the event of e1, which the peer pi is not allowed to publish. Here, information in the peer pi illegally flow to another peer. We define the legal flow relation among the peers. Then, we newly propose a subscription-based synchronization (SBS) protocol to prevent illegal information flow. Here, a notification is banned if the notification may cause illegal information flow. We evaluate the SBS protocol in terms of number of notifications banned.",
"title": ""
},
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
},
{
"docid": "c5428f44292952bfb9443f61aa6d6ce0",
"text": "In this letter, a tunable protection switch device using open stubs for $X$ -band low-noise amplifiers (LNAs) is proposed. The protection switch is implemented using p-i-n diodes. As the parasitic inductance in the p-i-n diodes may degrade the protection performance, tunable open stubs are attached to these diodes to obtain a grounding effect. The performance is optimized for the desired frequency band by adjusting the lengths of the microstrip line open stubs. The designed LNA protection switch is fabricated and measured, and sufficient isolation is obtained for a 200 MHz operating band. The proposed protection switch is suitable for solid-state power amplifier radars in which the LNAs need to be protected from relatively long pulses.",
"title": ""
},
{
"docid": "ba966c2fc67b88d26a3030763d56ed1a",
"text": "Design of a long read-range, reconfigurable operating frequency radio frequency identification (RFID) metal tag is proposed in this paper. The antenna structure consists of two nonconnected load bars and two bowtie patches electrically connected through four pairs of vias to a conducting backplane to form a looped-bowtie RFID tag antenna that is suitable for mounting on metallic objects. The design offers more degrees of freedom to tune the input impedance of the proposed antenna. The load bars, which have a cutoff point on each bar, can be used to reconfigure the operating frequency of the tag by exciting any one of the three possible frequency modes; hence, this tag can be used worldwide for the UHF RFID frequency band. Experimental tests show that the maximum read range of the prototype, placed on a metallic object, are found to be 3.0, 3.2, and 3.3 m, respectively, for the three operating modes, which has been tested for an RFID reader with only 0.4 W error interrupt pending register (EIPR). The paper shows that the simulated and measured results are in good agreement with each other.",
"title": ""
},
{
"docid": "fc2bb6b1dd1c04c7939bd3b4e14ae49b",
"text": "Stochastic differential equations (SDEs) and the Kolmogorov partial differential equations (PDEs) associated to them have been widely used in models from engineering, finance, and the natural sciences. In particular, SDEs and Kolmogorov PDEs, respectively, are highly employed in models for the approximative pricing of financial derivatives. Kolmogorov PDEs and SDEs, respectively, can typically not be solved explicitly and it has been and still is an active topic of research to design and analyze numerical methods which are able to approximately solve Kolmogorov PDEs and SDEs, respectively. Nearly all approximation methods for Kolmogorov PDEs in the literature suffer under the curse of dimensionality or only provide approximations of the solution of the PDE at a single fixed space-time point. In this paper we derive and propose a numerical approximation method which aims to overcome both of the above mentioned drawbacks and intends to deliver a numerical approximation of the Kolmogorov PDE on an entire region [a, b]d without suffering from the curse of dimensionality. Numerical results on examples including the heat equation, the Black-Scholes model, the stochastic Lorenz equation, and the Heston model suggest that the proposed approximation algorithm is quite effective in high dimensions in terms of both accuracy and speed. 1 ar X iv :1 80 6. 00 42 1v 1 [ m at h. N A ] 1 J un 2 01 8",
"title": ""
},
{
"docid": "3029cfa2951d50880e439205a12a5629",
"text": "Neuroevolutionary algorithms are successful methods for optimizing neural networks, especially for learning a neural policy (controller) in reinforcement learning tasks. Their significant advantage over gradient-based algorithms is the capability to search network topology as well as connection weights. However, state-of-the-art topology evolving methods are known to be inefficient compared to weight evolving methods with an appropriately hand-tuned topology. This paper introduces a novel efficient algorithm called CMA-TWEANN for evolving both topology and weights. Its high efficiency is achieved by introducing efficient topological mutation operators and integrating a state-of-the-art function optimization algorithm for weight optimization. Experiments on benchmark reinforcement learning tasks demonstrate that CMA-TWEANN solves tasks significantly faster than existing topology evolving methods. Furthermore, it outperforms weight evolving techniques even when they are equipped with a hand-tuned topology. Additional experiments reveal how and why CMA-TWEANN is the best performing weight evolving method.",
"title": ""
},
{
"docid": "70745e8cdf957b1388ab38a485e98e60",
"text": "Network studies of large-scale brain connectivity have begun to reveal attributes that promote the segregation and integration of neural information: communities and hubs. Network communities are sets of regions that are strongly interconnected among each other while connections between members of different communities are less dense. The clustered connectivity of network communities supports functional segregation and specialization. Network hubs link communities to one another and ensure efficient communication and information integration. This review surveys a number of recent reports on network communities and hubs, and their role in integrative processes. An emerging focus is the shifting balance between segregation and integration over time, which manifest in continuously changing patterns of functional interactions between regions, circuits and systems.",
"title": ""
},
{
"docid": "fb2724d712f76a9c9515ba593b5cdf6c",
"text": "This study used meta-analytic techniques to examine the relationship between emotional intelligence (EI) and performance outcomes. A total of 69 independent studies were located that reported correlations between EI and performance or other variables such as general mental ability (GMA) and the Big Five factors of personality. Results indicated that, across criteria, EI had an operational validity of .23 (k 1⁄4 59, N 1⁄4 9522). Various moderating influences such as the EI measure used, dimensions of EI, scoring method and criterion were evaluated. EI correlated .22 with general mental ability (k 1⁄4 19, N 1⁄4 4158) and .23 (Agreeableness and Openness to Experience; k 1⁄4 14, N 1⁄4 3306) to .34 (Extraversion; k 1⁄4 19, N 1⁄4 3718) with the Big Five factors of personality. Results of various subgroup analyses are presented and implications and future directions are provided. 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "05145a1f9f1d1423acb705159ec28f5e",
"text": "We describe the first sub-quadratic sampling algorithm for the Multiplicative Attribute Graph Model (MAGM) of Kim and Leskovec (2010). We exploit the close connection between MAGM and the Kronecker Product Graph Model (KPGM) of Leskovec et al. (2010), and show that to sample a graph from a MAGM it suffices to sample small number of KPGM graphs and quilt them together. Under a restricted set of technical conditions our algorithm runs in O ( (log2(n)) 3 |E| ) time, where n is the number of nodes and |E| is the number of edges in the sampled graph. We demonstrate the scalability of our algorithm via extensive empirical evaluation; we can sample a MAGM graph with 8 million nodes and 20 billion edges in under 6 hours.",
"title": ""
},
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
}
] |
scidocsrr
|
adea6d0ba3215ef1cac62e4091ef6a24
|
Job Satisfaction in Agile Development Teams: Agile Development as Work Redesign
|
[
{
"docid": "ff6c60d341ba05daa38a2f173eb03b19",
"text": "Despite the importance of online product recommendations (OPR) in e-Commerce transactions, there is still very little understanding about how different recommendation sources affect consumers' beliefs and behavior, and whether these effects are additive, complementary or rivals for different types of products. This study investigates the differential effects of provider recommendations (PR) and consumer reviews (CR) on the instrumental, affective and trusting dimensions of consumer beliefs, and show how these beliefs ultimately influence continued OPR usage and product purchase intentions. This study tests a conceptual model linking PR and CR to four consumer beliefs (perceived usefulness, perceived ease of use, perceived affective quality, and trust) in two different product settings (search products vs. experience products). Results of an experimental study (N = 396) show that users of PR express significantly higher perceived usefulness and perceived ease of use than users of CR, while users of CR express higher trusting beliefs and perceived affective quality than users of PR, resulting in different effect mechanisms towards OPR reuse and purchase intentions in e-Commerce transactions. Further, CR were found to elicit higher perceived usefulness, trusting beliefs and perceived affective quality for experience goods, while PR were found to unfold higher effects on all of these variables for search goods.",
"title": ""
}
] |
[
{
"docid": "8e53336bb4d216d78a6ab79faacb48fc",
"text": "Pattern glare is characterised by symptoms of visual perceptual distortions and visual stress on viewing striped patterns. People with migraine or Meares-Irlen syndrome (visual stress) are especially prone to pattern glare. The literature on pattern glare is reviewed, and the goal of this study was to develop clinical norms for the Wilkins and Evans Pattern Glare Test. This comprises three test plates of square wave patterns of spatial frequency 0.5, 3 and 12 cycles per degree (cpd). Patients are shown the 0.5 cpd grating and the number of distortions that are reported in response to a list of questions is recorded. This is repeated for the other patterns. People who are prone to pattern glare experience visual perceptual distortions on viewing the 3 cpd grating, and pattern glare can be quantified as either the sum of distortions reported with the 3 cpd pattern or as the difference between the number of distortions with the 3 and 12 cpd gratings, the '3-12 cpd difference'. In study 1, 100 patients consulting an optometrist performed the Pattern Glare Test and the 95th percentile of responses was calculated as the limit of the normal range. The normal range for the number of distortions was found to be <4 on the 3 cpd grating and <2 for the 3-12 cpd difference. Pattern glare was similar in both genders but decreased with age. In study 2, 30 additional participants were given the test in the reverse of the usual testing order and were compared with a sub-group from study 1, matched for age and gender. Participants experienced more distortions with the 12 cpd grating if it was presented after the 3 cpd grating. However, the order did not influence the two key measures of pattern glare. In study 3, 30 further participants who reported a medical diagnosis of migraine were compared with a sub-group of the participants in study 1 who did not report migraine or frequent headaches, matched for age and gender. The migraine group reported more symptoms on viewing all gratings, particularly the 3 cpd grating. The only variable to be significantly different between the groups was the 3-12 cpd difference. In conclusion, people have an abnormal degree of pattern glare if they have a Pattern Glare Test score of >3 on the 3 cpd grating or a score of >1 on the 3-12 cpd difference. The literature suggests that these people are likely to have visual stress in everyday life and may therefore benefit from interventions designed to alleviate visual stress, such as precision tinted lenses.",
"title": ""
},
{
"docid": "7b63daa48a700194f04293542c83bb20",
"text": "BACKGROUND\nPresent treatment strategies for rheumatoid arthritis include use of disease-modifying antirheumatic drugs, but a minority of patients achieve a good response. We aimed to test the hypothesis that an improved outcome can be achieved by employing a strategy of intensive outpatient management of patients with rheumatoid arthritis--for sustained, tight control of disease activity--compared with routine outpatient care.\n\n\nMETHODS\nWe designed a single-blind, randomised controlled trial in two teaching hospitals. We screened 183 patients for inclusion. 111 were randomly allocated either intensive management or routine care. Primary outcome measures were mean fall in disease activity score and proportion of patients with a good response (defined as a disease activity score <2.4 and a fall in this score from baseline by >1.2). Analysis was by intention-to-treat.\n\n\nFINDINGS\nOne patient withdrew after randomisation and seven dropped out during the study. Mean fall in disease activity score was greater in the intensive group than in the routine group (-3.5 vs -1.9, difference 1.6 [95% CI 1.1-2.1], p<0.0001). Compared with routine care, patients treated intensively were more likely to have a good response (definition, 45/55 [82%] vs 24/55 [44%], odds ratio 5.8 [95% CI 2.4-13.9], p<0.0001) or be in remission (disease activity score <1.6; 36/55 [65%] vs 9/55 [16%], 9.7 [3.9-23.9], p<0.0001). Three patients assigned routine care and one allocated intensive management died during the study; none was judged attributable to treatment.\n\n\nINTERPRETATION\nA strategy of intensive outpatient management of rheumatoid arthritis substantially improves disease activity, radiographic disease progression, physical function, and quality of life at no additional cost.",
"title": ""
},
{
"docid": "9b32fccec99a66ba01f69d92c966176f",
"text": "Trajectory tracking problem is one of the most important subjects that has been focused with many researchers in past years. In this paper, a Tractor-Trailer type robot including two nonholonomic constraints is analyzed. First, robot kinematic equations are obtained and transformed to the chained-form equations. Next, controllability of robot on the reference trajectory is evaluated and then appropriate reference trajectories for the tractor-trailer robot are generated. Finally a controller based on feedback linearization method is investigated in order to stabilize tracking errors about the origin. Obtained results show that the designed controller performs quite effective.",
"title": ""
},
{
"docid": "cee3833160aa1cc513e96d49b72eeea9",
"text": "Spatial filtering (SF) constitutes an integral part of building EEG-based brain-computer interfaces (BCIs). Algorithms frequently used for SF, such as common spatial patterns (CSPs) and independent component analysis, require labeled training data for identifying filters that provide information on a subject's intention, which renders these algorithms susceptible to overfitting on artifactual EEG components. In this study, beamforming is employed to construct spatial filters that extract EEG sources originating within predefined regions of interest within the brain. In this way, neurophysiological knowledge on which brain regions are relevant for a certain experimental paradigm can be utilized to construct unsupervised spatial filters that are robust against artifactual EEG components. Beamforming is experimentally compared with CSP and Laplacian spatial filtering (LP) in a two-class motor-imagery paradigm. It is demonstrated that beamforming outperforms CSP and LP on noisy datasets, while CSP and beamforming perform almost equally well on datasets with few artifactual trials. It is concluded that beamforming constitutes an alternative method for SF that might be particularly useful for BCIs used in clinical settings, i.e., in an environment where artifact-free datasets are difficult to obtain.",
"title": ""
},
{
"docid": "a278f1c4f6cb1b0e1bda447f70cd7749",
"text": "A digitally controlled oscillator (DCO) to be used in an all-digital phase-locked loop (PLL) is presented which offers a wide operating frequency range, a monotonic gain curve, and compensation for instantaneous supply voltage variation. The monotonic and wide oscillation frequency is achieved by interpolating at the fine tuning block between two nodes selected from a coarse delay line. Supply voltage compensation is obtained by dynamically adjusting the strength of the feedback latch of the delay cell in response to the change of the supply voltage.",
"title": ""
},
{
"docid": "667bca62dd6a9e755b4bae25e2670bb8",
"text": "This paper presents a Phantom Go program. It is based on a MonteCarlo approach. The program plays Phantom Go at an intermediate level.",
"title": ""
},
{
"docid": "87ca3f4c11e4853a4b2a153d5b9f1bfe",
"text": "The study of light verbs and complex predicates is frought wi th dangers and misunderstandings that go beyond the merely terminological. This paper attemp s to pic through the terminological, theoretical and empirical jungle in order to arrive at a nove l understanding of the role of light verbs crosslinguistically. In particular, this paper addresses how light verbs and complex predicates can be identified crosslinguistically, what the relationsh ip between the two is, and whether light verbs must always be associated with uniform syntactic and s emantic properties. Finally, the paper proposes a novel view of how light verbs are situated in the le xicon by addressing some historical data and their relationship with preverbs and verb particle s. Jespersen (1965,Volume VI:117) is generally credited with first coining the termlight verb, which he applied to English V+NP constructions as in (1).",
"title": ""
},
{
"docid": "89dea4ec4fd32a4a61be184d97ae5ba6",
"text": "In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image-synthesis. Based on the principal of positionalequivariance of features, Capsule Network’s ability to encode spatial relationships between the features of the image helps it become a more powerful critic in comparison to Convolutional Neural Networks (CNNs) used in current architectures for image synthesis. Our proposed GAN architectures learn the data manifold much faster and therefore, synthesize visually accurate images in significantly lesser number of training samples and training epochs in comparison to GANs and its variants that use CNNs. Apart from analyzing the quantitative results corresponding the images generated by different architectures, we also explore the reasons for the lower coverage and diversity explored by the GAN architectures that use CNN critics.",
"title": ""
},
{
"docid": "57e95050bcaf50fdb6c7a5390382a1b7",
"text": "We compare our own embodied conversational agent (ECA) scheme, BotCom, with seven other complex Internet-based ECAs according to recentlypublished information about them, and highlight some important attributes that have received little attention in the construction of realistic ECAs. BotCom incorporates the use of emotions, humor and complex information services. We cover issues that are likely to be of greatest interest for developers of ECAs that, like BotCom, are directed towards intensive commercial use. 1 Using ECAs on the Internet Many embodied conversational agents (ECAs) are targeting the Internet. However, systems that are bound to this global network not only benefit from several advantages of the huge amount of accessible information provided by this medium, but inherit its common problems as well. Among those are the difficulty of relevant search, complexity of available information, unstructuredness, bandwidth limitations etc. So, what are the main arguments in favor of deploying an ECA on the Internet? First of all, the preference for real-time events, real-time information flow, expresses an innate need of mankind. Internet ECAs have this advantage as opposed to any other on-line customer-company communication method, such as web pages, email, guest books, etc. In addition, secondary orality, the communication by dialogues as opposed to monologues, is also far more effective when dealing with humans [5]. Furthemore, even though ECAs and simpler chatterbots may give wrong answers to certain questions, they create some sort of representation of themselves in the customers mind [13]. An ordinary website can be considered not only less interactive than one with an ECA, but the way it operates is closer to monologues than to dialogues. We have developed BotCom, a fully working prototype system, as part of a research project. It is capable of chatting with users about different topics as well as displaying synchronized affective feedback based on a complex emotional state generator, GALA. Moreover, it has a feature of connecting to various information T. Rist et al. (Eds.): IVA 2003, LNAI 2792, pp. 5-12, 2003. Springer-Verlag Berlin Heidelberg 2003 6 Gábor Tatai et al. sources and search engines thus enabling an easily scalable knowledge base. Its primary use will be interactive website navigation, entertainment, marketing and education. BotCom is currently being introduced into commercial use. There is no space to discuss all the features and interesting implementation experiences with our BotCom ECA in this paper. Therefore we focus on some highlights where, we think, our ECA is special or when a theoretical or practical observation has proved to be particularly useful, so that others might benefit from these as well. 2 Comparison of Popular Internet Chatterbots During design and implementation we have analyzed, evaluated and constantly monitored existing ECAs in order to reinforce and validate our development approach. We did not follow only one methodology; several of them ([2], [4], [12], [15]) served as a basis of our own compound method, as, in spite of the similarities, overlaps frequently occurred and all of them contained unique evaluation variables. We studied the following (either commercial or award-wining) chatbots (see Table 1. for the results): Ultra Hal Assistant 4.5 (Zabaware, Inc., http://www.zabaware.com/assistant) Ramona (KurzweiAI.net, http://www.kurzweilai.net/) Elbot (Kiwilogic, http://www.elbot.com/) Ella (Kevin L. 
Copple, http://www.ellaz.com/EllaASPLoebner/Loebner2002.aspx) Nicole (NativeMinds, http://an1-sj.nativeminds.com/demos_default.html) Lucy (Artificial Life, http://www.artificial-life.com/v5/website.php) Julia (Conversive, http://www.vperson.com) 2.1 Visual Appearance In most cases visualization is typically solved by 2D graphics focusing only on the face, or photo-realistic schemes of still pictures (photos). Some tend to limit animation to only certain parts of the body (e.g. eyes, lips, eye-brows, chin), the roles of which are considered to be important in communication [11]. 3D animations are also applied occasionally, for instance in Lucys case. Despite the more lifelike and realistic appearance of 3D real-time rendered graphics, there is no underpinning evidence of differences in expressiveness amongst cartoons, photos, movies etc., though various studies confirm that users assume high-quality animated ECAs to be more intelligent [15]. Aiko, a female instance of BotCom, runs on the users web interface. The representation of her reactions and emotions is implemented through a 3D pre-processed (pre-rendered), realistic animation. Since the face and gestures provide the significant secondary communication channels [8], only the head, the torso (shoulders, arms) and occasionally the hands were visualized. To be able to diversify and refine the reactions, the collection of animations is extendable, but the right balance should be kept",
"title": ""
},
{
"docid": "d24980c1a1317c8dd055741da1b8c7a7",
"text": "Influence Maximization (IM), which selects a set of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"li-ieq1-2807843.gif\"/></alternatives></inline-formula> users (called seed set) from a social network to maximize the expected number of influenced users (called influence spread), is a key algorithmic problem in social influence analysis. Due to its immense application potential and enormous technical challenges, IM has been extensively studied in the past decade. In this paper, we survey and synthesize a wide spectrum of existing studies on IM from an <italic>algorithmic perspective</italic>, with a special focus on the following key aspects: (1) a review of well-accepted diffusion models that capture the information diffusion process and build the foundation of the IM problem, (2) a fine-grained taxonomy to classify existing IM algorithms based on their design objectives, (3) a rigorous theoretical comparison of existing IM algorithms, and (4) a comprehensive study on the applications of IM techniques in combining with novel context features of social networks such as topic, location, and time. Based on this analysis, we then outline the key challenges and research directions to expand the boundary of IM research.",
"title": ""
},
{
"docid": "d8a13de3c5ca958b0afac1629930d6e7",
"text": "As the number and the diversity of news outlets on the Web grows, so does the opportunity for \"alternative\" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole.\n In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that \"fringe\" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.",
"title": ""
},
{
"docid": "edba5ee93ead361ac4398c0f06d3ba06",
"text": "We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged in sentences, for a total of about 3.5M tokens per language. Talks have been partitioned in train, development and test sets similarly in all respects to the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark.",
"title": ""
},
{
"docid": "c75c8461134f3ad5855ef30a49f377fb",
"text": "Suspicious human activity recognition from surveillance video is an active research area of image processing and computer vision. Through the visual surveillance, human activities can be monitored in sensitive and public areas such as bus stations, railway stations, airports, banks, shopping malls, school and colleges, parking lots, roads, etc. to prevent terrorism, theft, accidents and illegal parking, vandalism, fighting, chain snatching, crime and other suspicious activities. It is very difficult to watch public places continuously, therefore an intelligent video surveillance is required that can monitor the human activities in real-time and categorize them as usual and unusual activities; and can generate an alert. Recent decade witnessed a good number of publications in the field of visual surveillance to recognize the abnormal activities. Furthermore, a few surveys can be seen in the literature for the different abnormal activities recognition; but none of them have addressed different abnormal activities in a review. In this paper, we present the state-of-the-art which demonstrates the overall progress of suspicious activity recognition from the surveillance videos in the last decade. We include a brief introduction of the suspicious human activity recognition with its issues and challenges. This paper consists of six abnormal activities such as abandoned object detection, theft detection, fall detection, accidents and illegal parking detection on road, violence activity detection, and fire detection. In general, we have discussed all the steps those have been followed to recognize the human activity from the surveillance videos in the literature; such as foreground object extraction, object detection based on tracking or non-tracking methods, feature extraction, classification; activity analysis and recognition. The objective of this paper is to provide the literature review of six different suspicious activity recognition systems with its general framework to the researchers of this field.",
"title": ""
},
{
"docid": "5fe4a9e1ef0ba8b98d410e48764acfc3",
"text": "We report an ethnographic study of prosocial behavior inconnection to League of Legends, one of the most popular games in the world. In this game community, the game developer, Riot Games, implemented a system that allowed players to volunteer their time to identify unacceptable player behaviors and punish players associated with these behaviors. With the prosocial goal of improving the community and promoting sportsmanship with in the competitive culture, a small portion of players worked diligently in the system with little reward. In this paper, we use interviews and analysis of forum discussions to examine how players themselves explain their participation in the system situated in the game culture of League of Legends. We show a myriad of social and technical factors that facilitated or hindered players' prosocial behavior. We discuss how our findings might provide generalizable insights for player engagement and community-building in online games.",
"title": ""
},
{
"docid": "f3fb98614d1d8ff31ca977cbf6a15a9c",
"text": "Paraphrase Identification and Semantic Similarity are two different yet well related tasks in NLP. There are many studies on these two tasks extensively on structured texts in the past. However, with the strong rise of social media data, studying these tasks on unstructured texts, particularly, social texts in Twitter is very interesting as it could be more complicated problems to deal with. We investigate and find a set of simple features which enables us to achieve very competitive performance on both tasks in Twitter data. Interestingly, we also confirm the significance of using word alignment techniques from evaluation metrics in machine translation in the overall performance of these tasks.",
"title": ""
},
{
"docid": "89267dbf693643ea53696c7d545254ea",
"text": "Cognitive dissonance theory is applicable to very limited areas of consumer behavior according to the author. Published findings in support of the theory are equivocal; they fail to show that cognitive dissonance is the only possible cause of observed \"dissonance-reducing\" behavior. Experimental evidences are examined and their weaknesses pointed out by the author to justify his position. He also provides suggestions regarding the circumstances under which dissonance reduction may be useful in increasing the repurchase probability of a purchased brand.",
"title": ""
},
{
"docid": "f4e7e0ea60d9697e8fea434990409c16",
"text": "Prognostics is very useful to predict the degradation trend of machinery and to provide an alarm before a fault reaches critical levels. This paper proposes an ARIMA approach to predict the future machine status with accuracy improvement by an improved forecasting strategy and an automatic prediction algorithm. Improved forecasting strategy increases the times of model building and creates datasets for modeling dynamically to avoid using the previous values predicted to forecast and generate the predictions only based on the true observations. Automatic prediction algorithm can satisfy the requirement of real-time prognostics by automates the whole process of ARIMA modeling and forecasting based on the Box-Jenkins's methodology and the improved forecasting strategy. The feasibility and effectiveness of the approach proposed is demonstrated through the prediction of the vibration characteristic in rotating machinery. The experimental results show that the approach can be applied successfully and effectively for prognostics of machine health condition.",
"title": ""
},
{
"docid": "e78c1fed6f3c09642a8c2c592545bea0",
"text": "We present a general framework and algorithmic approach for incremental approximation algorithms. The framework handles cardinality constrained minimization problems, such as the k-median and k-MST problems. Given some notion of ordering on solutions of different cardinalities k, we give solutions for all values of k such that the solutions respect the ordering and such that for any k, our solution is close in value to the value of an optimal solution of cardinality k. For instance, for the k-median problem, the notion of ordering is set inclusion and our incremental algorithm produces solutions such that any k and k', k < k', our solution of size k is a subset of our solution of size k'. We show that our framework applies to this incremental version of the k-median problem (introduced by Mettu and Plaxton [30]), and incremental versions of the k-MST problem, k-vertex cover problem, k-set cover problem, as well as the uncapacitated facility location problem (which is not cardinality-constrained). For these problems we either get new incremental algorithms, or improvements over what was previously known. We also show that the framework applies to hierarchical clustering problems. In particular, we give an improved algorithm for a hierarchical version of the k-median problem introduced by Plaxton [31].",
"title": ""
},
{
"docid": "f391c56dd581d965548062944200e95f",
"text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.",
"title": ""
}
] |
scidocsrr
|
61e0615bfddf8b34de2f7ef0ae75b41f
|
Camera Models and Optical Systems Used in Computer Graphics: Part I, Object-Based Techniques
|
[
{
"docid": "88abea475884eeec1049a573d107c6c9",
"text": "This paper extends the traditional pinhole camera projection geometry used in computer graphics to a more realistic camera model which approximates the effects of a lens and an aperture function of an actual camera. This model allows the generation of synthetic images which have a depth of field and can be focused on an arbitrary plane; it also permits selective modeling of certain optical characteristics of a lens. The model can be expanded to include motion blur and special-effect filters. These capabilities provide additional tools for highlighting important areas of a scene and for portraying certain physical characteristics of an object in an image.",
"title": ""
}
] |
[
{
"docid": "90f1e303325d2d9f56fdcc905924c7bf",
"text": "giving a statistic image for each contrast. P values for activations in the amygdala were corrected for the volume of brain analysed (specified as a sphere with radius 8 mm) 29. Anatomical localization for the group mean-condition-specific activations are reported in standard space 28. In all cases, the localization of the group mean activations was confirmed by registration with the subject's own MRIs. In an initial conditioning phase immediately before scanning, subjects viewed a sequence of greyscale images of four faces taken from a standard set of pictures of facial affect 30. Images of a single face were presented on a computer monitor screen for 75 ms at intervals of 15–25 s (mean 20 s). Each of the four faces was shown six times in a pseudorandom order. Two of the faces had angry expressions (A1 and A2), the other two being neutral (N1 and N2). One of the angry faces (CS+) was always followed by a 1-s 100-dB burst of white noise. In half of the subjects A1 was the CS+ face; in the other half, A2 was used. None of the other faces was ever paired with the noise. Before each of the 12 scanning windows, which occurred at 8-min intervals, a shortened conditioning sequence was played consisting of three repetitions of the four faces. During the 90-s scanning window, which seamlessly followed the conditioning phase, 12 pairs of faces, consisting of a target and mask, were shown at 5-s intervals. The target face was presented for 30 ms and was immediately followed by the masking face for 45 ms (Fig. 1). These stimulus parameters remained constant throughout all scans and effectively prevented any reportable awareness of the target face (which might be a neutral face or an angry face). There were four different conditions (Fig. 1), masked conditioned, non-masked conditioned, masked unconditioned, and non-masked unconditioned. Throughout the experiment, subjects performed the same explicit task, which was to detect any occurrence, however fleeting, of the angry faces. Immediately before the first conditioning sequence, subjects were shown the two angry faces and were instructed, for each stimulus presentation, to press a response button with the index finger of the right hand if one the angry faces appeared, or another button with the middle finger of the right hand if they did not see either of the angry faces. Throughout the acquisition and extinction phases, subjects' SCRs were monitored to …",
"title": ""
},
{
"docid": "904278b251c258d1dac9b652dcd7ee82",
"text": "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.",
"title": ""
},
{
"docid": "4e0a3dd1401a00ddc9d0620de93f4ecc",
"text": "The spatial-numerical association of response codes (SNARC) effect is the tendency for humans to respond faster to relatively larger numbers on the left or right (or with the left or right hand) and faster to relatively smaller numbers on the other side. This effect seems to occur due to a spatial representation of magnitude either in occurrence with a number line (wherein participants respond to relatively larger numbers faster on the right), other representations such as clock faces (responses are reversed from number lines), or culturally specific reading directions, begging the question as to whether the effect may be limited to humans. Given that a SNARC effect has emerged via a quantity judgement task in Western lowland gorillas and orangutans (Gazes et al., Cog 168:312–319, 2017), we examined patterns of response on a quantity discrimination task in American black bears, Western lowland gorillas, and humans for evidence of a SNARC effect. We found limited evidence for SNARC effect in American black bears and Western lowland gorillas. Furthermore, humans were inconsistent in direction and strength of effects, emphasizing the importance of standardizing methodology and analyses when comparing SNARC effects between species. These data reveal the importance of collecting data with humans in analogous procedures when testing nonhumans for effects assumed to bepresent in humans.",
"title": ""
},
{
"docid": "8fcc1b7e4602649f66817c4c50e10b3d",
"text": "Conventional wisdom suggests that praising a child as a whole or praising his or her traits is beneficial. Two studies tested the hypothesis that both criticism and praise that conveyed person or trait judgments could send a message of contingent worth and undermine subsequent coping. In Study 1, 67 children (ages 5-6 years) role-played tasks involving a setback and received 1 of 3 forms of criticism after each task: person, outcome, or process criticism. In Study 2, 64 children role-played successful tasks and received either person, outcome, or process praise. In both studies, self-assessments, affect, and persistence were measured on a subsequent task involving a setback. Results indicated that children displayed significantly more \"helpless\" responses (including self-blame) on all dependent measures after person criticism or praise than after process criticism or praise. Thus person feedback, even when positive, can create vulnerability and a sense of contingent self-worth.",
"title": ""
},
{
"docid": "1d6a5ba2f937caa1df5f6d32ffd3bcb4",
"text": "The objective of this study is to present an offline control of highly non-linear inverted pendulum system moving on a plane inclined at an angle of 10° from horizontal. The stabilisation was achieved using three different soft-computing control techniques i.e. Proportional-integral-derivative (PID), Fuzzy logic and Adaptive neuro fuzzy inference system (ANFIS). A Matlab-Simulink model of the proposed system was initially developed which was further simulated using PID controllers based on trial and error method. The ANFIS controller were trained using data sets generated from simulation results of PID controller. The ANFIS controllers were designed using only three membership functions. A fuzzy logic control of the proposed system is also shown using nine membership functions. The study compares the three techniques in terms of settling time, maximum overshoot and steady state error. The simulation results are shown with the help of graphs and tables which validates the effectiveness of proposed techniques.",
"title": ""
},
{
"docid": "65fac26fc29ff492eb5a3e43f58ecfb2",
"text": "The introduction of new anticancer drugs into the clinic is often hampered by a lack of qualified biomarkers. Method validation is indispensable to successful biomarker qualification and is also a regulatory requirement. Recently, the fit-for-purpose approach has been developed to promote flexible yet rigorous biomarker method validation, although its full implications are often overlooked. This review aims to clarify many of the scientific and regulatory issues surrounding biomarker method validation and the analysis of samples collected from clinical trial subjects. It also strives to provide clear guidance on validation strategies for each of the five categories that define the majority of biomarker assays, citing specific examples.",
"title": ""
},
{
"docid": "49a53a8cb649c93d685e832575acdb28",
"text": "We address the vehicle detection and classification problems using Deep Neural Networks (DNNs) approaches. Here we answer to questions that are specific to our application including how to utilize DNN for vehicle detection, what features are useful for vehicle classification, and how to extend a model trained on a limited size dataset, to the cases of extreme lighting condition. Answering these questions we propose our approach that outperforms state-of-the-art methods, and achieves promising results on image with extreme lighting conditions.",
"title": ""
},
{
"docid": "4754c9c1ed44986ce562ea12c8b9fb5d",
"text": "In this paper, we present the first 3D discrete curvelet transform. This transform is an extension to the 2D transform described in Candès et al.. The resulting curvelet frame preserves the important properties, such as parabolic scaling, tightness and sparse representation for singularities of codimension one. We describe three different implementations: in-core, out-of-core and MPI-based parallel implementations. Numerical results verify the desired properties of the 3D curvelets and demonstrate the efficiency of our implementations.",
"title": ""
},
{
"docid": "1cee79d4a07b4ef97098be940484afe8",
"text": "We show that existing methods for training preposition error correction systems, whether using well-edited text or error-annotated corpora, do not generalize across very different test sets. We present a new, large errorannotated corpus and use it to train systems that generalize across three different test sets, each from a different domain and with different error characteristics. This new corpus is automatically extracted from Wikipedia revisions and contains over one million instances of preposition corrections.",
"title": ""
},
{
"docid": "79287d0ca833605430fefe4b9ab1fd92",
"text": "Passwords are frequently used in data encryption and user authentication. Since people incline to choose meaningful words or numbers as their passwords, lots of passwords are easy to guess. This paper introduces a password guessing method based on Long Short-Term Memory recurrent neural networks. After training our LSTM neural network with 30 million passwords from leaked Rockyou dataset, the generated 3.35 billion passwords could cover 81.52% of the remaining Rockyou dataset. Compared with PCFG and Markov methods, this method shows higher coverage rate.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "50389f4ec27cf68af999ee33c3210edf",
"text": "Rising water temperature associated with climate change is increasingly recognized as a potential stressor for aquatic organisms, particularly for tropical ectotherms that are predicted to have narrow thermal windows relative to temperate ectotherms. We used intermittent flow resting and swimming respirometry to test for effects of temperature increase on aerobic capacity and swim performance in the widespread African cichlid Pseudocrenilabrus multicolor victoriae, acclimated for a week to a range of temperatures (2°C increments) between 24 and 34°C. Standard metabolic rate (SMR) increased between 24 and 32°C, but fell sharply at 34°C, suggesting either an acclimatory reorganization of metabolism or metabolic rate depression. Maximum metabolic rate (MMR) was elevated at 28 and 30°C relative to 24°C. Aerobic scope (AS) increased between 24 and 28°C, then declined to a level comparable to 24°C, but increased dramatically 34°C, the latter driven by the drop in SMR in the warmest treatment. Critical swim speed (Ucrit) was highest at intermediate temperature treatments, and was positively related to AS between 24 and 32°C; however, at 34°C, the increase in AS did not correspond to an increase in Ucrit, suggesting a performance cost at the highest temperature.",
"title": ""
},
{
"docid": "d0148c8d12ac5bdb4afda5d702481180",
"text": "The recently proposed distributional approach to reinforcement learning (DiRL) is centered on learning the distribution of the reward-to-go, often referred to as the value distribution. In this work, we show that the distributional Bellman equation, which drives DiRL methods, is equivalent to a generative adversarial network (GAN) model. In this formulation, DiRL can be seen as learning a deep generative model of the value distribution, driven by the discrepancy between the distribution of the current value, and the distribution of the sum of current reward and next value. We use this insight to propose a GAN-based approach to DiRL, which leverages the strengths of GANs in learning distributions of highdimensional data. In particular, we show that our GAN approach can be used for DiRL with multivariate rewards, an important setting which cannot be tackled with prior methods. The multivariate setting also allows us to unify learning the distribution of values and state transitions, and we exploit this idea to devise a novel exploration method that is driven by the discrepancy in estimating both values and states.",
"title": ""
},
{
"docid": "19f0bf4e45e40ae18616cdf55ee5ab40",
"text": "Fournier's gangrene is a rare process which affects soft tissue in the genital and perirectal area. It can also progress to all different stages of sepsis, and abdominal compartment syndrome can be one of its complications. Two patients in septic shock due to Fournier gangrene were admitted to the Intensive Care Unit of Emergency Department. In both cases, infection started from the scrotum and the necrosis quickly involved genitals, perineal, and inguinal regions. Patients were treated with surgical debridement, protective colostomy, hyperbaric oxygen therapy, and broad-spectrum antibacterial chemotherapy. Vacuum-assisted closure (VAC) therapy was applied to the wound with the aim to clean, decontaminate, and avoid abdominal compartmental syndrome development. Both patients survived and were discharged from Intensive Care Unit after hyperbaric oxygen therapy cycles and abdominal closure.",
"title": ""
},
{
"docid": "a5255efa61de43a3341473facb4be170",
"text": "Differentiation of 3T3-L1 preadipocytes can be induced by a 2-d treatment with a factor \"cocktail\" (DIM) containing the synthetic glucocorticoid dexamethasone (dex), insulin, the phosphodiesterase inhibitor methylisobutylxanthine (IBMX) and fetal bovine serum (FBS). We temporally uncoupled the activities of the four DIM components and found that treatment with dex for 48 h followed by IBMX treatment for 48 h was sufficient for adipogenesis, whereas treatment with IBMX followed by dex failed to induce significant differentiation. Similar results were obtained with C3H10T1/2 and primary mesenchymal stem cells. The 3T3-L1 adipocytes differentiated by sequential treatment with dex and IBMX displayed insulin sensitivity equivalent to DIM adipocytes, but had lower sensitivity to ISO-stimulated lipolysis and reduced triglyceride content. The nondifferentiating IBMX-then-dex treatment produced transient expression of adipogenic transcriptional regulatory factors C/EBPbeta and C/EBPdelta, and little induction of terminal differentiation factors C/EBPalpha and PPARgamma. Moreover, the adipogenesis inhibitor preadipocyte factor-1 (Pref-1) was repressed by DIM or by dex-then-IBMX, but not by IBMX-then-dex treatment. We conclude that glucocorticoids drive preadipocytes to a novel intermediate cellular state, the dex-primed preadipocyte, during adipogenesis in cell culture, and that Pref-1 repression may be a cell fate determinant in preadipocytes.",
"title": ""
},
{
"docid": "abc2d0757184f5c50e4f2b3a6dabb56c",
"text": "This paper describes the hardware implementation of the RANdom Sample Consensus (RANSAC) algorithm for featured-based image registration applications. The Multiple-Input Signature Register (MISR) and the index register are used to achieve the random sampling effect. The systolic array architecture is adopted to implement the forward elimination step in the Gaussian elimination. The computational complexity in the forward elimination is reduced by sharing the coefficient matrix. As a result, the area of the hardware cost is reduced by more than 50%. The proposed architecture is realized using Verilog and achieves real-time calculation on 30 fps 1024 * 1024 video stream on 100 MHz clock.",
"title": ""
},
{
"docid": "e7cf8ca46f578bdee582b3c80a875bd8",
"text": "Many real world pattern classification problems involve the process and analysis of multiple variables in temporal domain. This type of problem is referred to as Multivariate Time Series (MTS) problem. It remains a challenging problem due to the nature of time series data: high dimensionality, large data size and updating continuously. In this paper, we use three types of physiological signals from the driver to predict lane changes before the event actually occurs. These are the electrocardiogram (ECG), galvanic skin response (GSR), and respiration rate (RR) and were determined, in prior studies, to best reflect a driver’s response to the driving environment. A novel Group-wise Convolutional Neural Network, MTS-GCNN model is proposed for MTS pattern classification. In our MTS-GCNN model, we present a new structure learning algorithm in training stage. The algorithm exploits the covariance structure over multiple time series to partition input volume into groups, then learns the MTS-GCNN structure explicitly by clustering input sequences with spectral clustering. Different from other feature-based classification approaches, our MTS-GCNN can select and extract the suitable internal structure to generate temporal and spatial features automatically by using convolution and down-sample operations. The experimental results showed that, in comparison to other state-of-the-art models, our MTS-GCNN performs significantly better in terms of prediction accuracy.",
"title": ""
},
{
"docid": "103f95f36a5d740bbfa908f25f30514b",
"text": "We present the design, modeling, and implementation of a novel pneumatic actuator, the Pneumatic Reel Actuator (PRA). The PRA is highly extensible, lightweight, capable of operating in compression and tension, compliant, and inexpensive. An initial prototype of the PRA can reach extension ratios greater than 16:1, has a force-to-weight ratio over 28:1, reach speeds of 0.87 meters per second, and can be constructed with parts totaling less than $4 USD. We have developed a model describing the actuator and have conducted experiments characterizing the actuator's performance in regards to force, extension, pressure, and speed. We have implemented two parallel robotic applications in the form of a three degree of freedom robot arm and a tetrahedral robot.",
"title": ""
}
] |
scidocsrr
|
eed761de29abe175298b5f6dfb097529
|
Deep Feature Learning for Graphs
|
[
{
"docid": "a917a0ed4f9082766aeef29cb82eeb27",
"text": "Roles represent node-level connectivity patterns such as star-center, star-edge nodes, near-cliques or nodes that act as bridges to different regions of the graph. Intuitively, two nodes belong to the same role if they are structurally similar. Roles have been mainly of interest to sociologists, but more recently, roles have become increasingly useful in other domains. Traditionally, the notion of roles were defined based on graph equivalences such as structural, regular, and stochastic equivalences. We briefly revisit these early notions and instead propose a more general formulation of roles based on the similarity of a feature representation (in contrast to the graph representation). This leads us to propose a taxonomy of three general classes of techniques for discovering roles that includes (i) graph-based roles, (ii) feature-based roles, and (iii) hybrid roles. We also propose a flexible framework for discovering roles using the notion of similarity on a feature-based representation. The framework consists of two fundamental components: (a) role feature construction and (b) role assignment using the learned feature representation. We discuss the different possibilities for discovering feature-based roles and the tradeoffs of the many techniques for computing them. Finally, we discuss potential applications and future directions and challenges.",
"title": ""
},
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
}
] |
[
{
"docid": "16bd1ca1e6320e0875dede14e7a2cc7d",
"text": "Software process is viewed as an important factor to deliver high quality products. Although there have been several Software Process Models proposed, the software processes are still short of formal descriptions. This paper presents an ontology-based approach to express software processes at the conceptual level. An OWL-based ontology for software processes, called SPO (Software Process Ontology), is designed, and it is extended to generate ontologies for specific process models, such as CMMI and ISO/IEC 15504. A prototype of a web-based process assessment tool based on SPO is developed to illustrate the advantages of this approach. Finally, some further research in this direction is outlined.",
"title": ""
},
{
"docid": "02aed3ad7a5a4a70cfb3f9f4923e3a34",
"text": "Social media platforms such as Facebook are now a ubiquitous part of everyday life for many people. New media scholars posit that the participatory culture encouraged by social media gives rise to new forms of literacy skills that are vital to learning. However, there have been few attempts to use analytics to understand the new media literacy skills that may be embedded in an individual's participation in social media. In this paper, I collect raw activity data that was shared by an exploratory sample of Facebook users. I then utilize factor analysis and regression models to show how (a) Facebook members' online activity coalesce into distinct categories of social media behavior and (b) how these participatory behaviors correlate with and predict measures of new media literacy skills. The study demonstrates the use of analytics to understand the literacies embedded in people's social media activity. The implications speak to the potential of social learning analytics to identify and predict new media literacy skills from data streams in social media platforms.",
"title": ""
},
{
"docid": "be6ed89571fbd1b0720f00d0338d514b",
"text": "We perform sensitivity analyses to assess the impact of missing data on the structural properties of social networks. The social network is conceived of as being generated by a bipartite graph, in which actors are linked together via multiple interaction contexts or affiliations. We discuss three principal missing data mechanisms: network boundary specification (non-inclusion of actors or affiliations), survey non-response, and censoring by vertex degree (fixed choice design), examining their impact on the scientific collaboration network from the Los Alamos E-print Archive as well as random bipartite graphs. The simulation results show that network boundary specification and fixed choice designs can dramatically alter estimates of network-level statistics. The observed clustering and assortativity coefficients are overestimated via omission of affiliations or fixed choice thereof, and underestimated via actor non-response, which results in inflated measurement error. We also find that social networks with multiple interaction contexts may have certain interesting properties due to the presence of overlapping cliques. In particular, assortativity by degree does not necessarily improve network robustness to random omission of nodes as predicted by current theory.",
"title": ""
},
{
"docid": "d4c55e8e70392b7f7a9bcfe325b7a0da",
"text": "BACKGROUND\nFollicular mucinosis coexisting with lymphoproliferative disorders has been thoroughly debated. However, it has been rarely reported in association with inflammatory disorders.\n\n\nMETHODS\nThirteen cases have been retrieved, and those with cutaneous lymphoma or alopecia mucinosa were excluded.\n\n\nRESULTS\nFollicular mucinosis was found in the setting of squamous cell carcinoma, seborrheic keratosis, simple prurigo, acne vulgaris, dextrometorphan-induced phototoxicity, polymorphous light eruption (2 cases), insect bite (2 cases), tick bite, discoid lupus erythematosus, drug-related vasculitis, and demodecidosis. Unexpectedly, our observations revealed a preponderating accumulation of mucin related to photo-exposed areas, sun-associated dermatoses, and histopathologic solar elastosis. The amount of mucin filling the follicles apparently correlated with the intensity of perifollicular inflammatory infiltrate, which was present in all cases. The concurrence of dermal interstitial mucin was found in 7 cases (54%).\n\n\nCONCLUSIONS\nThe concurrence of interstitial dermal mucinosis or the potential role of both ultraviolet radiation and the perifollicular inflammatory infiltrates in its pathogenesis deserves further investigations. Precise recognition and understanding of this distinctive, reactive histological pattern may prevent our patients from unnecessary diagnostic and therapeutic strategies.",
"title": ""
},
{
"docid": "87a8009147398908c79c927654f2039d",
"text": "This paper presents a new adaptive binarization technique for degraded hand-held camera-captured document images. The state-of-the-art locally adaptive binarization methods are sensitive to the values of free parameter. This problem is more critical when binarizing degraded camera-captured document images because of distortions like non-uniform illumination, bad shading, blurring, smearing and low resolution. We demonstrate in this paper that local binarization methods are not only sensitive to the selection of free parameters values (either found manually or automatically), but also sensitive to the constant free parameters values for all pixels of a document image. Some range of values of free parameters are better for foreground regions and some other range of values are better for background regions. For overcoming this problem, we present an adaptation of a state-of-the-art local binarization method such that two different set of free parameters values are used for foreground and background regions respectively. We present the use of ridges detection for rough estimation of foreground regions in a document image. This information is then used to calculate appropriate threshold using different set of free parameters values for the foreground and background regions respectively. The evaluation of the method using an OCR-based measure and a pixel-based measure show that our method achieves better performance as compared to state-of-the-art global and local binarization methods.",
"title": ""
},
{
"docid": "ac7dd65b4f09aba635d399a2bd86ff99",
"text": "We study the role of the second language in bilingual word embeddings in monolingual semantic evaluation tasks. We find strongly and weakly positive correlations between down-stream task performance and second language similarity to the target language. Additionally, we show how bilingual word embeddings can be employed for the task of semantic language classification and that joint semantic spaces vary in meaningful ways across second languages. Our results support the hypothesis that semantic language similarity is influenced by both structural similarity as well as geography/contact.",
"title": ""
},
{
"docid": "7962440362fd5b955f83784a0068f8b5",
"text": "Data warehousing is one of the major research topics of appliedside database investigators. Most of the work to date has focused on building large centralized systems that are integrated repositories founded on pre-existing systems upon which all corporate-wide data are based. Unfortunately, this approach is very expensive and tends to ignore the advantages realized during the past decade in the area of distribution and support for data localization in a geographically dispersed corporate structure. This research investigates building distributed data warehouses with particular emphasis placed on distribution design for the data warehouse environment. The article provides an architectural model for a distributed data warehouse, the formal definition of the relational data model for data warehouse and a methodology for distributed data warehouse design along with a “horizontal” fragmentation algorithm for the fact relation.",
"title": ""
},
{
"docid": "c9748c67c2ab17cfead44fe3b486883d",
"text": "Entropy coding is an integral part of most data compression systems. Huffman coding (HC) and arithmetic coding (AC) are two of the most widely used coding methods. HC can process a large symbol alphabet at each step allowing for fast encoding and decoding. However, HC typically provides suboptimal data rates due to its inherent approximation of symbol probabilities to powers of 1 over 2. In contrast, AC uses nearly accurate symbol probabilities, hence generally providing better compression ratios. However, AC relies on relatively slow arithmetic operations making the implementation computationally demanding. In this paper we discuss asymmetric numeral systems (ANS) as a new approach to entropy coding. While maintaining theoretical connections with AC, the proposed ANS-based coding can be implemented with much less computational complexity. While AC operates on a state defined by two numbers specifying a range, an ANS-based coder operates on a state defined by a single natural number such that the x ∈ ℕ state contains ≈ log2(x) bits of information. This property allows to have the entire behavior for a large alphabet summarized in the form of a relatively small table (e.g. a few kilobytes for a 256 size alphabet). The proposed approach can be interpreted as an equivalent to adding fractional bits to a Huffman coder to combine the speed of HC and the accuracy offered by AC. Additionally, ANS can simultaneously encrypt a message encoded this way. Experimental results demonstrate effectiveness of the proposed entropy coder.",
"title": ""
},
{
"docid": "84bc3c35868aa02778eef4350153c092",
"text": "Google’s PageRank method was developed to evaluate the importance of web-pages via their link structure. The mathematics of PageRank, however, are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It’s even used for systems analysis of road networks, as well as biology, chemistry, neuroscience, and physics. We’ll see the mathematics and ideas that unite these diverse applications.",
"title": ""
},
{
"docid": "53e9a5a6ce764ca0d3399d7097c3a71b",
"text": "Machine Learning is a field of research aimed at constructing intelligent machines that gain and improve their skills by learning and adaptation. As such, Machine Learning research addresses several classes of learning problems, including for instance, supervised and unsupervised learning. Arguably, the most ubiquitous and realistic class of learning problems, faced by both living creatures and artificial agents, is known as Reinforcement Learning. Reinforcement Learning problems are characterized by a long-term interaction between the learning agent and a dynamic, unfamiliar, uncertain, possibly even hostile environment. Mathematically, this interaction is modeled as a Markov Decision Process (MDP). Probably the most significant contribution of this thesis is in the introduction of a new class of Reinforcement Learning algorithms, which leverage the power of a statistical set of tools known as Gaussian Processes. This new approach to Reinforcement Learning offers viable solutions to some of the major limitations of current Reinforcement Learning methods, such as the lack of confidence intervals for performance predictions, and the difficulty of appropriately reconciling exploration with exploitation. Analysis of these algorithms and their relationship with existing methods also provides us with new insights into the assumptions underlying some of the most popular Reinforcement Learning algorithms to date.",
"title": ""
},
{
"docid": "008ad9d12f1a8451f46be59eeef5bf0b",
"text": "0957-4174/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.eswa.2011.05.070 ⇑ Corresponding author. Tel.: +34 953 212898; fax: E-mail address: msaleh@ujaen.es (M. Rushdi Saleh 1 http://www.amazon.com. 2 http://www.epinions.com. 3 http://www.imdb.com. Recently, opinion mining is receiving more attention due to the abundance of forums, blogs, e-commerce web sites, news reports and additional web sources where people tend to express their opinions. Opinion mining is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. In this paper we explore this new research area applying Support Vector Machines (SVM) for testing different domains of data sets and using several weighting schemes. We have accomplished experiments with different features on three corpora. Two of them have already been used in several works. The last one has been built from Amazon.com specifically for this paper in order to prove the feasibility of the SVM for different domains. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4cc4c8fd07f30b5546be2376c1767c19",
"text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.",
"title": ""
},
{
"docid": "2804384964bc8996e6574bdf67ed9cb5",
"text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.",
"title": ""
},
{
"docid": "5010761051983f5de1f18a11d477f185",
"text": "Financial forecasting has been challenging problem due to its high non-linearity and high volatility. An Artificial Neural Network (ANN) can model flexible linear or non-linear relationship among variables. ANN can be configured to produce desired set of output based on set of given input. In this paper we attempt at analyzing the usefulness of artificial neural network for forecasting financial data series with use of different algorithms such as backpropagation, radial basis function etc. With their ability of adapting non-linear and chaotic patterns, ANN is the current technique being used which offers the ability of predicting financial data more accurately. \"A x-y-1 network topology is adopted because of x input variables in which variable y was determined by the number of hidden neurons during network selection with single output.\" Both x and y were changed.",
"title": ""
},
{
"docid": "e92831c27bc5a65ca3b45a4f3671016c",
"text": "A library of 600 taxonomically diverse Panamanian plant extracts was screened for DPPH scavenging and UV-B protective activities, and the methanolic extracts of Mosquitoxylum jamaicense, Combretum cacoucia, and Casearia commersionia were submitted to HPLC-based activity profiling. The compounds located in the active time windows were isolated and identified as gallic acid derivatives and flavonoids. Gallic acid methyl ester (3) and digallic acid derivatives (2, 6) showed the highest DPPH scavenging activity (<10 μg/mL), while protocatechuic acid (7) and isoquercitrin (10) exhibited the highest UV-B protective properties.",
"title": ""
},
{
"docid": "5213aa65c5a291f0839046607dcf5f6c",
"text": "The distribution and mobility of chromium in the soils and sludge surrounding a tannery waste dumping area was investigated to evaluate its vertical and lateral movement of operational speciation which was determined in six steps to fractionate the material in the soil and sludge into (i) water soluble, (ii) exchangeable, (iii) carbonate bound, (iv) reducible, (v) oxidizable, and (vi) residual phases. The present study shows that about 63.7% of total chromium is mobilisable, and 36.3% of total chromium is nonbioavailable in soil, whereas about 30.2% of total chromium is mobilisable, and 69.8% of total chromium is non-bioavailable in sludge. In contaminated sites the concentration of chromium was found to be higher in the reducible phase in soils (31.3%) and oxidisable phases in sludge (56.3%) which act as the scavenger of chromium in polluted soils. These results also indicate that iron and manganese rich soil can hold chromium that will be bioavailable to plants and biota. Thus, results of this study can indicate the status of bioavailable of chromium in this area, using sequential extraction technique. So a suitable and proper management of handling tannery sludge in the said area will be urgently needed to the surrounding environment as well as ecosystems.",
"title": ""
},
{
"docid": "93a2d7072ab88ad77c23f7c1dc5a129c",
"text": "In recent decades, the need for efficient and effective image search from large databases has increased. In this paper, we present a novel shape matching framework based on structures common to similar shapes. After representing shapes as medial axis graphs, in which nodes show skeleton points and edges connect nearby points, we determine the critical nodes connecting or representing a shape’s different parts. By using the shortest path distance from each skeleton (node) to each of the critical nodes, we effectively retrieve shapes similar to a given query through a transportation-based distance function. To improve the effectiveness of the proposed approach, we employ a unified framework that takes advantage of the feature representation of the proposed algorithm and the classification capability of a supervised machine learning algorithm. A set of shape retrieval experiments including a comparison with several well-known approaches demonstrate the proposed algorithm’s efficacy and perturbation experiments show its robustness.",
"title": ""
},
{
"docid": "1a41bd991241ed1751beda2362465a0d",
"text": "Over the last decade, Convolutional Neural Networks (CNN) saw a tremendous surge in performance. However, understanding what a network has learned still proves to be a challenging task. To remedy this unsatisfactory situation, a number of groups have recently proposed different methods to visualize the learned models. In this work we suggest a general taxonomy to classify and compare these methods, subdividing the literature into three main categories and providing researchers with a terminology to base their works on. Furthermore, we introduce the FeatureVis library for MatConvNet: an extendable, easy to use open source library for visualizing CNNs. It contains implementations from each of the three main classes of visualization methods and serves as a useful tool for an enhanced understanding of the features learned by intermediate layers, as well as for the analysis of why a network might fail for certain examples.",
"title": ""
},
{
"docid": "7098df58dc9f86c9b462610f03bd97a6",
"text": "The advent of the computer and computer science, and in particular virtual reality, offers new experiment possibilities with numerical simulations and introduces a new type of investigation for the complex systems study : the in virtuo experiment. This work lies on the framework of multi-agent systems. We propose a generic model for systems biology based on reification of the interactions, on a concept of organization and on a multi-model approach. By ``reification'' we understand that interactions are considered as autonomous agents. The aim has been to combine the systemic paradigm and the virtual reality to provide an application able to collect, simulate, experiment and understand the knowledge owned by different biologists working around an interdisciplinary subject. In that case, we have been focused on the urticaria disease understanding. The method permits to integrate different natures of model. We have modeled biochemical reactions, molecular diffusion, cell organisations and mechanical interactions. It also permits to embed different expert system modeling methods like fuzzy cognitive maps.",
"title": ""
},
{
"docid": "33cf6c26de09c7772a529905d9fa6b5c",
"text": "Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation.\n This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write, will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to pro-actively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.",
"title": ""
}
] |
scidocsrr
|
306ffa9c2027284821b57b25cd1dd2c5
|
Mining Online User-Generated Content: Using Sentiment Analysis Technique to Study Hotel Service Quality
|
[
{
"docid": "5491c265a1eb7166bb174097b49d258e",
"text": "The importance of service quality for business performance has been recognized in the literature through the direct effect on customer satisfaction and the indirect effect on customer loyalty. The main objective of the study was to measure hotels' service quality performance from the customer perspective. To do so, a performance-only measurement scale (SERVPERF) was administered to customers stayed in three, four and five star hotels in Aqaba and Petra. Although the importance of service quality and service quality measurement has been recognized, there has been limited research that has addressed the structure and antecedents of the concept for the hotel industry. The clarification of the dimensions is important for managers in the hotel industry as it identifies the bundles of service attributes consumers find important. The results of the study demonstrate that SERVPERF is a reliable and valid tool to measure service quality in the hotel industry. The instrument consists of five dimensions, namely \"tangibles\", \"responsiveness\", \"empathy\", \"assurance\" and \"reliability\". Hotel customers are expecting more improved services from the hotels in all service quality dimensions. However, hotel customers have the lowest perception scores on empathy and tangibles. In the light of the results, possible managerial implications are discussed and future research subjects are recommended.",
"title": ""
},
{
"docid": "586ba74140fb7f68cc7c5b0990fb7352",
"text": "Hotel companies are struggling to keep up with the rapid consumer adoption of social media. Although many companies have begun to develop social media programs, the industry has yet to fully explore the potential of this emerging data and communication resource. The revenue management department, as it evolves from tactical inventory management to a more expansive role across the organization, is poised to be an early adopter of the opportunities afforded by social media. We propose a framework for evaluating social media-related revenue management opportunities, discuss the issues associated with leveraging these opportunities and propose a roadmap for future research in this area. Journal of Revenue and Pricing Management (2011) 10, 293–305. doi:10.1057/rpm.2011.12; published online 6 May 2011",
"title": ""
},
{
"docid": "5f366ed9a90448be28c1ec9249b4ec96",
"text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"title": ""
}
] |
[
{
"docid": "da321cc3b6549650d24bb467468ffaf1",
"text": "This tutorial provides an introduction to the Simultaneous Localisation and Mapping (SLAM) method and the extensive research on SLAM that has been undertaken. Part I of this tutorial described the essential SLAM problem. Part II of this tutorial (this paper) is concerned with recent advances in computational methods and in new formulations of the SLAM problem for large scale and complex",
"title": ""
},
{
"docid": "35d11265d367c6eeca6f3dfb8ef67a36",
"text": "A synthetic aperture radar (SAR) can produce high-resolution two-dimensional images of mapped areas. The SAR comprises a pulsed transmitter, an antenna, and a phase-coherent receiver. The SAR is borne by a constant velocity vehicle such as an aircraft or satellite, with the antenna beam axis oriented obliquely to the velocity vector. The image plane is defined by the velocity vector and antenna beam axis. The image orthogonal coordinates are range and cross range (azimuth). The amplitude and phase of the received signals are collected for the duration of an integration time after which the signal is processed. High range resolution is achieved by the use of wide bandwidth transmitted pulses. High azimuth resolution is achieved by focusing, with a signal processing technique, an extremely long antenna that is synthesized from the coherent phase history. The pulse repetition frequency of the SAR is constrained within bounds established by the geometry and signal ambiguity limits. SAR operation requires relative motion between radar and target. Nominal velocity values are assumed for signal processing and measurable deviations are used for error compensation. Residual uncertainties and high-order derivatives of the velocity which are difficult to compensate may cause image smearing, defocusing, and increased image sidelobes. The SAR transforms the ocean surface into numerous small cells, each with dimensions of range and azimuth resolution. An image of a cell can be produced provided the radar cross section of the cell is sufficiently large and the cell phase history is deterministic. Ocean waves evidently move sufficiently uniformly to produce SAR images which correlate well with optical photographs and visual observations. The relationship between SAR images and oceanic physical features is not completely understood, and more analyses and investigations are desired.",
"title": ""
},
{
"docid": "7e40c7145f4613f12e7fc13646f3927c",
"text": "One strategy for intelligent agents in order to reach their goals is to plan their actions in advance. This can be done by simulating how the agent’s actions affect the environment and how it evolves independently of the agent. For this simulation, a model of the environment is needed. However, the creation of this model might be labor-intensive and it might be computational complex to evaluate during simulation. That is why, we suggest to equip an intelligent agent with a learned intuition about the dynamics of its environment by utilizing the concept of intuitive physics. To demonstrate our approach, we used an agent that can freely move in a two dimensional floor plan. It has to collect moving targets while avoiding the collision with static and dynamic obstacles. In order to do so, the agent plans its actions up to a defined planning horizon. The performance of our agent, which intuitively estimates the dynamics of its surrounding objects based on artificial neural networks, is compared to an agent which has a physically exact model of the world and one that acts randomly. The evaluation shows comparatively good results for the intuition based agent considering it uses only a quarter of the computation time in comparison to the agent with a physically exact model.",
"title": ""
},
{
"docid": "1ce476577e092ee91d54afc672f29196",
"text": "In this paper we continue to investigate how the deep neural network (DNN) based acoustic models for automatic speech recognition can be trained without hand-crafted feature extraction. Previously, we have shown that a simple fully connected feedforward DNN performs surprisingly well when trained directly on the raw time signal. The analysis of the weights revealed that the DNN has learned a kind of short-time time-frequency decomposition of the speech signal. In conventional feature extraction pipelines this is done manually by means of a filter bank that is shared between the neighboring analysis windows. Following this idea, we show that the performance gap between DNNs trained on spliced hand-crafted features and DNNs trained on raw time signal can be strongly reduced by introducing 1D-convolutional layers. Thus, the DNN is forced to learn a short-time filter bank shared over a longer time span. This also allows us to interpret the weights of the second convolutional layer in the same way as 2D patches learned on critical band energies by typical convolutional neural networks. The evaluation is performed on an English LVCSR task. Trained on the raw time signal, the convolutional layers allow to reduce the WER on the test set from 25.5% to 23.4%, compared to an MFCC based result of 22.1% using fully connected layers.",
"title": ""
},
{
"docid": "0fed6d4a16e8071a6b39db70350b711a",
"text": "Cloud manufacturing: a new manufacturing paradigm Lin Zhang a b , Yongliang Luo a b , Fei Tao a b , Bo Hu Li a b c , Lei Ren a b , Xuesong Zhang a b , Hua Guo a b , Ying Cheng a b , Anrui Hu a b & Yongkui Liu a b a School of Automation Science and Electrical Engineering, Beihang University , Beijing , 100191 , P.R. , China b Engineering Research Center of Complex Product Advanced Manufacturing Systems, Ministry of Education, Beihang University , Beijing , 100191 , P.R. , China c Beijing Simulation Center , Beijing 100854 , P.R. , China Published online: 21 May 2012.",
"title": ""
},
{
"docid": "2363fc33282d50e0cfab71672eb5dc2a",
"text": "Most previous solutions to the schema matching problem rely in some fashion upon identifying \"similar\" column names in the schemas to be matched, or by recognizing common domains in the data stored in the schemas. While each of these approaches is valuable in many cases, they are not infallible, and there exist instances of the schema matching problem for which they do not even apply. Such problem instances typically arise when the column names in the schemas and the data in the columns are \"opaque\" or very difficult to interpret. In this paper we propose a two-step technique that works even in the presence of opaque column names and data values. In the first step, we measure the pair-wise attribute correlations in the tables to be matched and construct a dependency graph using mutual information as a measure of the dependency between attributes. In the second stage, we find matching node pairs in the dependency graphs by running a graph matching algorithm. We validate our approach with an experimental study, the results of which suggest that such an approach can be a useful addition to a set of (semi) automatic schema matching techniques.",
"title": ""
},
{
"docid": "db76ba085f43bc826f103c6dd4e2ebb5",
"text": "It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory-augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles.",
"title": ""
},
{
"docid": "28ff541a446bfb7783d1fae2492df734",
"text": "Using an advanced thin wafer technology, we have successfully fabricated the next generation 650V class IGBT with an improved SOA and maintaining the narrow distribution of the electrical characteristics for industrial applications. The applied techniques were the finer pattern transistor cell, the thin wafer process and the optimized back side doping concentration profiles. With the well organized back-side wafer process, the practically large chip has achieved without any sacrifice of the production yield. As a results, VCEsat-Eoff trade-off relationship and an Energy of Short Circuit by active Area (ESC/A) are improved in comparison with the conventional Punch Through (PT) structure.",
"title": ""
},
{
"docid": "8477b50ea5b4dd76f0bf7190ba05c284",
"text": "It is shown how Conceptual Graphs and Formal Concept Analysis may be combined to obtain a formalization of Elementary Logic which is useful for knowledge representation and processing. For this, a translation of conceptual graphs to formal contexts and concept lattices is described through an example. Using a suitable mathematization of conceptual graphs, basics of a uniied mathematical theory for Elementary Logic are proposed.",
"title": ""
},
{
"docid": "93ed81d5244715aaaf402817aa674310",
"text": "Automatically recognized terminology is widely used for various domain-specific texts processing tasks, such as machine translation, information retrieval or ontology construction. However, there is still no agreement on which methods are best suited for particular settings and, moreover, there is no reliable comparison of already developed methods. We believe that one of the main reasons is the lack of state-of-the-art methods implementations, which are usually non-trivial to recreate. In order to address these issues, we present ATR4S, an open-source software written in Scala that comprises more than 15 methods for automatic terminology recognition (ATR) and implements the whole pipeline from text document preprocessing, to term candidates collection, term candidates scoring, and finally, term candidates ranking. It is highly scalable, modular and configurable tool with support of automatic caching. We also compare 13 state-of-the-art methods on 7 open datasets by average precision and processing time. Experimental comparison reveals that no single method demonstrates best average precision for all datasets and that other available tools for ATR do not contain the best methods.",
"title": ""
},
{
"docid": "1913c6ce69e543a3ae9a90b73c9efddd",
"text": "Cooperative Intelligent Transportation Systems, mainly represented by vehicular ad hoc networks (VANETs), are among the key components contributing to the Smart City and Smart World paradigms. Based on the continuous exchange of both periodic and event triggered messages, smart vehicles can enhance road safety, while also providing support for comfort applications. In addition to the different communication protocols, securing such communications and establishing a certain trustiness among vehicles are among the main challenges to address, since the presence of dishonest peers can lead to unwanted situations. To this end, existing security solutions are typically divided into two main categories, cryptography and trust, where trust appeared as a complement to cryptography on some specific adversary models and environments where the latter was not enough to mitigate all possible attacks. In this paper, we provide an adversary-oriented survey of the existing trust models for VANETs. We also show when trust is preferable to cryptography, and the opposite. In addition, we show how trust models are usually evaluated in VANET contexts, and finally, we point out some critical scenarios that existing trust models cannot handle, together with some possible solutions.",
"title": ""
},
{
"docid": "7855f5c3a3abec2f31c3ef9b3b65d9bb",
"text": "BLEU is the de facto standard machine translation (MT) evaluation metric. However, because BLEU computes a geometric mean of n-gram precisions, it often correlates poorly with human judgment on the sentence-level. Therefore, several smoothing techniques have been proposed. This paper systematically compares 7 smoothing techniques for sentence-level BLEU. Three of them are first proposed in this paper, and they correlate better with human judgments on the sentence-level than other smoothing techniques. Moreover, we also compare the performance of using the 7 smoothing techniques in statistical machine translation tuning.",
"title": ""
},
{
"docid": "74c86a2ff975d8298b356f0243e82ab0",
"text": "Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited by the ability of capturing mainly the statistics of training data, and is hardly adaptive to new scenarios or flexible for acquiring new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition and propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and use the justlearned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "8c9c9ad5e3d19b56a096e519cc6e3053",
"text": "Cebocephaly and sirenomelia are uncommon birth defects. Their association is extremely rare; however, the presence of spina bifida with both conditions is not unexpected. We report on a female still-birth with cebocephaly, alobar holoprosencephaly, cleft palate, lumbar spina bifida, sirenomelia, a single umbilical artery, and a 46,XX karyotype, but without maternal diabetes mellitus. Our case adds to the examples of overlapping cephalic and caudal defects, possibly related to vulnerability of the midline developmental field or axial mesodermal dysplasia spectrum.",
"title": ""
},
{
"docid": "3066649e6dfe8579b5eb2a82eecb93ea",
"text": "Little is known about how components of executive function (EF) jointly and uniquely predict different aspects of academic achievement and how this may vary across cultural contexts. In the current study, 119 Chinese and 139 American preschoolers were tested on a battery of EF tasks (i.e., inhibition, working memory, and attentional control) as well as academic achievement tasks (i.e., reading and mathematics). Results demonstrate that although working memory performance in both cultures was comparable, Chinese children outperformed American children on inhibition and attentional control tasks. In addition, the relation between components of EF and achievement was similar in the two countries. Working memory uniquely predicted academic achievement, with some intriguing patterns in regard to tasks requiring complex processing. Inhibition uniquely predicted counting but did not uniquely predict calculation. Attentional control predicted most aspects of achievement uniformly and was the most robust predictor for reading in both countries. In sum, the data provide insight into both cultural variability and consistency in the development of EF during early childhood.",
"title": ""
},
{
"docid": "d15804e98b58fa5ec0985c44f6bb6033",
"text": "Urrently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback based approach in which the representation is formed in an iterative manner based on a feedback received from previous iterations output. We establish that a feedback based approach has several core advantages over feedforward: it enables making early predictions at the query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback develops a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We provide a general feedback based learning architecture, instantiated using existing RNNs, with the endpoint results on par or better than existing feedforward networks and the addition of the above advantages.",
"title": ""
},
{
"docid": "8ea6c4957443916c2102f8a173f9d3dc",
"text": "INTRODUCTION\nOpioid overdose fatality has increased threefold since 1999. As a result, prescription drug overdose surpassed motor vehicle collision as the leading cause of unintentional injury-related death in the USA. Naloxone , an opioid antagonist that has been available for decades, can safely reverse opioid overdose if used promptly and correctly. However, clinicians often overestimate the dose of naloxone needed to achieve the desired clinical outcome, precipitating acute opioid withdrawal syndrome (OWS).\n\n\nAREAS COVERED\nThis article provides a comprehensive review of naloxone's pharmacologic properties and its clinical application to promote the safe use of naloxone in acute management of opioid intoxication and to mitigate the risk of precipitated OWS. Available clinical data on opioid-receptor kinetics that influence the reversal of opioid agonism by naloxone are discussed. Additionally, the legal and social barriers to take home naloxone programs are addressed.\n\n\nEXPERT OPINION\nNaloxone is an intrinsically safe drug, and may be administered in large doses with minimal clinical effect in non-opioid-dependent patients. However, when administered to opioid-dependent patients, naloxone can result in acute opioid withdrawal. Therefore, it is prudent to use low-dose naloxone (0.04 mg) with appropriate titration to reverse ventilatory depression in this population.",
"title": ""
},
{
"docid": "515e4ae8fabe93495d8072fe984d8bb6",
"text": "Most studies in statistical or machine learning based authorship attribution focus on two or a few authors. This leads to an overestimation of the importance of the features extracted from the training data and found to be discriminating for these small sets of authors. Most studies also use sizes of training data that are unrealistic for situations in which stylometry is applied (e.g., forensics), and thereby overestimate the accuracy of their approach in these situations. A more realistic interpretation of the task is as an authorship verification problem that we approximate by pooling data from many different authors as negative examples. In this paper, we show, on the basis of a new corpus with 145 authors, what the effect is of many authors on feature selection and learning, and show robustness of a memory-based learning approach in doing authorship attribution and verification with many authors and limited training data when compared to eager learning methods such as SVMs and maximum entropy learning.",
"title": ""
}
] |
scidocsrr
|
0db71867f1cbc8734dadd5d541cf4317
|
Enhancing Differential Evolution Utilizing Eigenvector-Based Crossover Operator
|
[
{
"docid": "3293e4e0d7dd2e29505db0af6fbb13d1",
"text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.",
"title": ""
}
] |
[
{
"docid": "43228a3436f23d786ad7faa7776f1e1b",
"text": "Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) include Wegener granulomatosis, microscopic polyangiitis, Churg–Strauss syndrome and renal-limited vasculitis. This Review highlights the progress that has been made in our understanding of AAV pathogenesis and discusses new developments in the treatment of these diseases. Evidence from clinical studies, and both in vitro and in vivo experiments, supports a pathogenic role for ANCAs in the development of AAV; evidence is stronger for myeloperoxidase-ANCAs than for proteinase-3-ANCAs. Neutrophils, complement and effector T cells are also involved in AAV pathogenesis. With respect to treatment of AAV, glucocorticoids, cyclophosphamide and other conventional therapies are commonly used to induce remission in generalized disease. Pulse intravenous cyclophosphamide is equivalent in efficacy to oral cyclophosphamide but seems to be associated with less adverse effects. Nevertheless, alternatives to cyclophosphamide therapy have been investigated, such as the use of methotrexate as a less-toxic alternative to cyclophosphamide to induce remission in non-organ-threatening or non-life-threatening AAV. Furthermore, rituximab is equally as effective as cyclophosphamide for induction of remission in AAV and might become the standard of therapy in the near future. Controlled trials in which specific immune effector cells and molecules are being therapeutically targeted have been initiated or are currently being planned.",
"title": ""
},
{
"docid": "b85ca1bbd3d5224b0e10b2cda433fe8f",
"text": "We show that the Graph Isomorphism (GI) problem and its generalizations, the String Isomorphism (SI) and Coset Intersection (CI) problems, can be solved in quasipolynomial (exp ( (log n) ) ) time. The best previous bound for GI was exp(O( √ n log n)), where n is the number of vertices (Luks, 1983); for SI and CI, the bound was similar, exp(Õ( √ n)), where n is the size of the permutation domain (Babai, 1983). The SI problem takes as input two strings, x and y, of length n, and a permutation group G of degree n and asks if some element of G transforms x into y. Our algorithm builds on Luks’s SI framework and attacks its bottleneck, characterized by an epimorphism φ of G onto the alternating group acting on a set Γ of size k > c log n. Our goal is to break this symmetry. The crucial first step is to find a canonical t-ary relational structure on Γ, with not too much symmetry, for some t = O(log n). We say that an element x in the domain of G is affected by φ if φ maps the stabilizer of x to a proper subgroup of Ak. The affected/unaffected dichotomy provides a device to construct global symmetry from local information through the core group-theoretic “local certificates” routine. This algorithm in turn produces the required t-ary structure and thereby sets the stage for symmetry breaking via combinatorial methods of canonical partitioning. The latter lead to the emergence of the Johnson graphs as the sole obstructions to effective canonical partitioning. For a list of updates compared to the first two arXiv versions, see the Acknowledgments (Sec. 18.1). WARNING. While the present version fills significant gaps of the previous versions and improves the presentation of some components of the paper, the revision is incomplete; at the current stage, it includes notational, conceptual, and organizational inconsistencies. A fuller explanation of this disclaimer appears in the Acknowledgments (Sec. 18.1) at the end of the paper. ∗ Research supported in part by NSF Grants CCF-7443327 (2014-current), CCF-1017781 (2010-2014), and CCF-0830370 (2008–2010). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author and do not necessarily reflect the views of the National Science Foundation (NSF).",
"title": ""
},
{
"docid": "26162f0e3f6c8752a5dbf7174d2e5e44",
"text": "Literature on the combination of qualitative and quantitative research components at the primary empirical study level has recently accumulated exponentially. However, this combination is only rarely discussed and applied at the research synthesis level. The purpose of this paper is to explore the possible contribution of mixed methods research to the integration of qualitative and quantitative research at the synthesis level. In order to contribute to the methodology and utilization of mixed methods at the synthesis level, we present a framework to perform mixed methods research syntheses (MMRS). The presented classification framework can help to inform researchers intending to carry out MMRS, and to provide ideas for conceptualizing and developing those syntheses. We illustrate the use of this framework by applying it to the planning of MMRS on effectiveness studies concerning interventions for challenging behavior in persons with intellectual disabilities, presenting two hypothetical examples. Finally, we discuss possible strengths of MMRS and note some remaining challenges concerning the implementation of these syntheses.",
"title": ""
},
{
"docid": "4c2ab8f148d2e3136d4976b1b88184d5",
"text": "In ten years, more than half the world’s population will be living in cities. The United Ž . Nations UN has stated that this will threaten cities with social conflict, environmental degradation and the collapse of basic services. The economic, social, and environmental planning practices of societies embodying ‘urban sustainability’ have been proposed as antidotes to these negative urban trends. ‘Urban sustainability’ is a doctrine with diverse origins. The author believes that the alternative models of cultural development in Curitiba, Brazil, Kerala, India, and Nayarit, Mexico embody the integration and interlinkage of economic, social, and environmental sustainability. Curitiba has become a more livable city by building an efficient intra-urban bus system, expanding urban green space, and meeting the basic needs of the urban poor. Kerala has attained social harmony by emphasizing equitable resource distribution rather than consumption, by restraining reproduction, and by attacking divisions of race, caste, religion, and gender. Nayarit has sought to balance development with the environment by framing a nature-friendly development plan that protects natural systems from urban development and that involves the public in the development process. A detailed examination of these alternative cultural development models reveals a myriad of possible means by which economic, social, and environmental sustainability might be advanced in practice. The author concludes that while these examples from the developing world cannot be directly translated to cities in the developed world, they do indicate in a general sense the imaginative policies that any society must foster if it is to achieve ‘urban sustainability’.",
"title": ""
},
{
"docid": "cbf878cd5fbf898bdf88a2fcf5024826",
"text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.",
"title": ""
},
{
"docid": "feb34f36aed8e030f93c0adfbe49ee8b",
"text": "Complex queries containing outer joins are, for the most part, executed by commercial DBMS products in an \"as written\" manner. Only a very few reorderings of the operations are considered and the benefits of considering comprehensive reordering schemes are not exploited. This is largely due to the fact there are no readily usable results for reordering such operations for relations with duplicates and/or outer join predicates that are other than \"simple.\" Most previous approaches have ignored duplicates and complex predicates; the very few that have considered these aspects have suggested approaches that lead to a possibly exponential number of, and redundant intermediate joins. Since traditional query graph models are inadequate for modeling outer join queries with complex predicates, we present the needed hypergraph abstraction and algorithms for reordering such queries with joins and outer joins. As a result, the query optimizer can explore a significantly larger space of execution plans, and choose one with a low cost. Further, these algorithms are easily incorporated into well known and widely used enumeration methods such as dynamic programming.",
"title": ""
},
{
"docid": "01288eefbf2bc0e8c9dc4b6e0c6d70e9",
"text": "The latest discoveries on diseases and their diagnosis/treatment are mostly disseminated in the form of scientific publications. However, with the rapid growth of the biomedical literature and a high level of variation and ambiguity in disease names, the task of retrieving disease-related articles becomes increasingly challenging using the traditional keywordbased approach. An important first step for any disease-related information extraction task in the biomedical literature is the disease mention recognition task. However, despite the strong interest, there has not been enough work done on disease name identification, perhaps because of the difficulty in obtaining adequate corpora. Towards this aim, we created a large-scale disease corpus consisting of 6900 disease mentions in 793 PubMed citations, derived from an earlier corpus. Our corpus contains rich annotations, was developed by a team of 12 annotators (two people per annotation) and covers all sentences in a PubMed abstract. Disease mentions are categorized into Specific Disease, Disease Class, Composite Mention and Modifier categories. When used as the gold standard data for a state-of-the-art machine-learning approach, significantly higher performance can be found on our corpus than the previous one. Such characteristics make this disease name corpus a valuable resource for mining disease-related information from biomedical text. The NCBI corpus is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Fe llows/Dogan/disease.html.",
"title": ""
},
{
"docid": "bc3f64571ac833049e95994c675df26a",
"text": "Effective Poisson–Nernst–Planck (PNP) equations are derived for ion transport in charged porous media under forced convection (periodic flow in the frame of the mean velocity) by an asymptotic multiscale expansion with drift. The homogenized equations provide a modeling framework for engineering while also addressing fundamental questions about electrodiffusion in charged porous media, relating to electroneutrality, tortuosity, ambipolar diffusion, Einstein’s relation, and hydrodynamic dispersion. The microscopic setting is a two-component periodic composite consisting of a dilute electrolyte continuum (described by standard PNP equations) and a continuous dielectric matrix, which is impermeable to the ions and carries a given surface charge. As a first approximation for forced convection, the electrostatic body force on the fluid and electro-osmotic flows are neglected. Four new features arise in the upscaled equations: (i) the effective ionic diffusivities and mobilities become tensors, related to the microstructure; (ii) the effective permittivity is also a tensor, depending on the electrolyte/matrix permittivity ratio and the ratio of the Debye screening length to the macroscopic length of the porous medium; (iii) the microscopic convection leads to a diffusion-dispersion correction in the effective diffusion tensor; and (iv) the surface charge per volume appears as a continuous “background charge density,” as in classical membrane models. The coefficient tensors in the upscaled PNP equations can be calculated from periodic reference cell problems. For an insulating solid matrix, all gradients are corrected by the same tensor, and the Einstein relation holds at the macroscopic scale, which is not generally the case for a polarizable matrix, unless the permittivity and electric field are suitably defined. In the limit of thin double layers, Poisson’s equation is replaced by macroscopic electroneutrality (balancing ionic and surface charges). The general form of the macroscopic PNP equations may also hold for concentrated solution theories, based on the local-density and mean-field approximations. These results have broad applicability to ion transport in porous electrodes, separators, membranes, ion-exchange resins, soils, porous rocks, and biological tissues.",
"title": ""
},
{
"docid": "e0580a51b7991f86559a7a3aa8b26204",
"text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.",
"title": ""
},
{
"docid": "1a5ddde73f38ab9b2563540c36c222c0",
"text": "This paper presents a self-adaptive autonomous online learning through a general type-2 fuzzy system (GT2 FS) for the motor imagery (MI) decoding of a brain-machine interface (BMI) and navigation of a bipedal humanoid robot in a real experiment, using electroencephalography (EEG) brain recordings only. GT2 FSs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of EEG channels is limited and fixed; 2) no possibility of performing repeated user training sessions; and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel online learning method presented in this paper consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath–Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models), which are learnt in a continous (trial-by-trial) non-iterative basis. The effectiveness of the proposed method is demonstrated in a detailed BMI experiment, in which 15 untrained users were able to accurately interface with a humanoid robot, in a single session, using signals from six EEG electrodes only.",
"title": ""
},
{
"docid": "914c985dc02edd09f0ee27b75ecee6a4",
"text": "Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP studies in 5- to 16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual components, the P1 and the occipito-temporal N170. To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between the ages of 4 and 17, and a group of adults. We found that none of the previously reported age-dependent changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity: it had the same topography and right hemisphere dominance, it was absent for meaningless (scrambled) stimuli, and larger and earlier for faces than cars. The data also illustrate the large amount of inter-individual and inter-trial variance in young children's data, which causes the N170 to merge with a later component, the N250, in grand-averaged data. Based on our observations, we suggest that the previously reported \"bi-fid\" N170 of young children is in fact the N250. Overall, our data indicate that the electrophysiological markers of face-sensitive perceptual processes are present from 4 years of age and do not appear to change throughout development.",
"title": ""
},
{
"docid": "d0aa53919bbb869a2c033247e413fc72",
"text": "We describe and present a new Question Answering (QA) component that can be easily used by the QA research community. It can be used to answer questions over DBpedia and Wikidata. The language support over DBpedia is restricted to English, while it can be used to answer questions in 4 different languages over Wikidata namely English, French, German and Italian. Moreover it supports both full natural language queries as well as keyword queries. We describe the interfaces to access and reuse it and the services it can be combined with. Moreover we show the evaluation results we achieved on the QALD-7 benchmark.",
"title": ""
},
{
"docid": "df97ff54b80a096670c7771de1f49b6d",
"text": "In recent times, Bitcoin has gained special attention both from industry and academia. The underlying technology that enables Bitcoin (or more generally crypto-currency) is called blockchain. At the core of the blockchain technology is a data structure that keeps record of the transactions in the network. The special feature that distinguishes it from existing technology is its immutability of the stored records. To achieve immutability, it uses consensus and cryptographic mechanisms. As the data is stored in distributed nodes this technology is also termed as \"Distributed Ledger Technology (DLT)\". As many researchers and practitioners are joining the hype of blockchain, some of them are raising the question about the fundamental difference between blockchain and traditional database and its real value or potential. In this paper, we present a critical analysis of both technologies based on a survey of the research literature where blockchain solutions are applied to various scenarios. Based on this analysis, we further develop a decision tree diagram that will help both practitioners and researchers to choose the appropriate technology for their use cases. Using our proposed decision tree we evaluate a sample of the existing works to see to what extent the blockchain solutions have been used appropriately in the relevant problem domains.",
"title": ""
},
{
"docid": "fcf410fc492f3ddf80be9cb5351f7aed",
"text": "Unmanned Combat Aerial Vehicle (UCAV) research has allowed the state of the art of the remote-operation of these technologies to advance significantly in modern times, though mostly focusing on ground strike scenarios. Within the context of air-to-air combat, millisecond long timeframes for critical decisions inhibit remoteoperation of UCAVs. Beyond this, given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon. While many proponents for an increase in autonomous capabilities herald the ability to design aircraft that can perform extremely high-g maneuvers as well as the benefit of reducing risk to our pilots, this white paper will primarily focus on the increase in capabilities of real-time decision making.",
"title": ""
},
{
"docid": "f87a4ddb602d9218a0175a9e804c87c6",
"text": "We present a novel online audio-score alignment approach for multi-instrument polyphonic music. This approach uses a 2-dimensional state vector to model the underlying score position and tempo of each time frame of the audio performance. The process model is defined by dynamic equations to transition between states. Two representations of the observed audio frame are proposed, resulting in two observation models: a multi-pitch-based and a chroma-based. Particle filtering is used to infer the hidden states from observations. Experiments on 150 music pieces with polyphony from one to four show the proposed approach outperforms an existing offline global string alignment-based score alignment approach. Results also show that the multi-pitch-based observation model works better than the chroma-based one.",
"title": ""
},
{
"docid": "abbb210122d470215c5a1d0420d9db06",
"text": "Ensemble clustering, also known as consensus clustering, is emerging as a promising solution for multi-source and/or heterogeneous data clustering. The co-association matrix based method, which redefines the ensemble clustering problem as a classical graph partition problem, is a landmark method in this area. Nevertheless, the relatively high time and space complexity preclude it from real-life large-scale data clustering. We therefore propose SEC, an efficient Spectral Ensemble Clustering method based on co-association matrix. We show that SEC has theoretical equivalence to weighted K-means clustering and results in vastly reduced algorithmic complexity. We then derive the latent consensus function of SEC, which to our best knowledge is among the first to bridge co-association matrix based method to the methods with explicit object functions. The robustness and generalizability of SEC are then investigated to prove the superiority of SEC in theory. We finally extend SEC to meet the challenge rising from incomplete basic partitions, based on which a scheme for big data clustering can be formed. Experimental results on various real-world data sets demonstrate that SEC is an effective and efficient competitor to some state-of-the-art ensemble clustering methods and is also suitable for big data clustering.",
"title": ""
},
{
"docid": "45cff09810b8741d8be1010aa6ff3000",
"text": "This paper discusses experience in applying time harmonic three-dimensional (3D) finite element (FE) analysis in analyzing an axial-flux (AF) solid-rotor induction motor (IM). The motor is a single rotor - single stator AF IM. The construction presented in this paper has not been analyzed before in any technical documents. The field analysis and the comparison of torque calculation results of the 3D calculations with measured torque results are presented",
"title": ""
},
{
"docid": "2c1689a9a6d257f9e2ce8f33a1e30cb9",
"text": "This study examined the use of neural word embeddings for clinical abbreviation disambiguation, a special case of word sense disambiguation (WSD). We investigated three different methods for deriving word embeddings from a large unlabeled clinical corpus: one existing method called Surrounding based embedding feature (SBE), and two newly developed methods: Left-Right surrounding based embedding feature (LR_SBE) and MAX surrounding based embedding feature (MAX_SBE). We then added these word embeddings as additional features to a Support Vector Machines (SVM) based WSD system. Evaluation using the clinical abbreviation datasets from both the Vanderbilt University and the University of Minnesota showed that neural word embedding features improved the performance of the SVMbased clinical abbreviation disambiguation system. More specifically, the new MAX_SBE method outperformed the other two methods and achieved the state-of-the-art performance on both clinical abbreviation datasets.",
"title": ""
},
{
"docid": "f51583c6eb5a0d6e27823e0714d40ef5",
"text": "Studies of emotion regulation typically contrast two or more strategies (e.g., reappraisal vs. suppression) and ignore variation within each strategy. To address such variation, we focused on cognitive reappraisal and considered the effects of goals (i.e., what people are trying to achieve) and tactics (i.e., what people actually do) on outcomes (i.e., how affective responses change). To examine goals, we randomly assigned participants to either increase positive emotion or decrease negative emotion to a negative stimulus. To examine tactics, we categorized participants' reports of how they reappraised. To examine reappraisal outcomes, we measured experience and electrodermal responding. Findings indicated that (a) the goal of increasing positive emotion led to greater increases in positive affect and smaller decreases in skin conductance than the goal of decreasing negative emotion, and (b) use of the reality challenge tactic was associated with smaller increases in positive affect during reappraisal. These findings suggest that reappraisal can be implemented in the service of different emotion goals, using different tactics. Such differences are associated with different outcomes, and they should be considered in future research and applied attempts to maximize reappraisal success.",
"title": ""
}
] |
scidocsrr
|
75d374514cbb6dc2a302a5403bba9501
|
A methodology for community detection in Twitter
|
[
{
"docid": "c0c7752c6b9416e281c3649e70f9daae",
"text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.",
"title": ""
}
] |
[
{
"docid": "35da724255bbceb859d01ccaa0dec3b1",
"text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.",
"title": ""
},
{
"docid": "4fb0803aa12b7dfb2b3661822ea67c2b",
"text": "In this paper we present a broad overview of the last 40 years of research on cognitive architectures. Although the number of existing architectures is nearing several hundred, most of the existing surveys do not reflect this growth and focus on a handful of well-established architectures. Thus, in this survey we wanted to shift the focus towards a more inclusive and high-level overview of the research on cognitive architectures. Our final set of 85 architectures includes 49 that are still actively developed, and borrow from a diverse set of disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we discuss only the core cognitive abilities, such as perception, attention mechanisms, action selection, memory, learning and reasoning. In order to assess the breadth of practical applications of cognitive architectures we gathered information on over 900 practical projects implemented using the cognitive architectures in our list. We use various visualization techniques to highlight overall trends in the development of the field. In addition to summarizing the current state-of-the-art in the cognitive architecture research, this survey describes a variety of methods and ideas that have been tried and their relative success in modeling human cognitive abilities, as well as which aspects of cognitive behavior need more research with respect to their mechanistic counterparts and thus can further inform how cognitive science might progress.",
"title": ""
},
{
"docid": "46d3cec76fc52fb7141fc6d999931d6e",
"text": "Numerous studies suggest that infants delivered by cesarean section are at a greater risk of non-communicable diseases than their vaginal counterparts. In particular, epidemiological studies have linked Cesarean delivery with increased rates of asthma, allergies, autoimmune disorders, and obesity. Mode of delivery has also been associated with differences in the infant microbiome. It has been suggested that these differences are attributable to the \"bacterial baptism\" of vaginal birth, which is bypassed in cesarean deliveries, and that the abnormal establishment of the early-life microbiome is the mediator of later-life adverse outcomes observed in cesarean delivered infants. This has led to the increasingly popular practice of \"vaginal seeding\": the iatrogenic transfer of vaginal microbiota to the neonate to promote establishment of a \"normal\" infant microbiome. In this review, we summarize and critically appraise the current evidence for a causal association between Cesarean delivery and neonatal dysbiosis. We suggest that, while Cesarean delivery is certainly associated with alterations in the infant microbiome, the lack of exposure to vaginal microbiota is unlikely to be a major contributing factor. Instead, it is likely that indication for Cesarean delivery, intrapartum antibiotic administration, absence of labor, differences in breastfeeding behaviors, maternal obesity, and gestational age are major drivers of the Cesarean delivery microbial phenotype. We, therefore, call into question the rationale for \"vaginal seeding\" and support calls for the halting of this practice until robust evidence of need, efficacy, and safety is available.",
"title": ""
},
{
"docid": "a0a2037d04dd0e2b0defa8fbfd3072a4",
"text": "The sequential parameter optimization (spot) package for R (R Development Core Team, 2008) is a toolbox for tuning and understanding simulation and optimization algorithms. Model-based investigations are common approaches in simulation and optimization. Sequential parameter optimization has been developed, because there is a strong need for sound statistical analysis of simulation and optimization algorithms. spot includes methods for tuning based on classical regression and analysis of variance techniques; tree-based models such as CART and random forest; Gaussian process models (Kriging) and combinations of different metamodeling approaches. This article exemplifies how spot can be used for automatic and interactive tuning.",
"title": ""
},
{
"docid": "493c45304bd5b7dd1142ace56e94e421",
"text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.",
"title": ""
},
{
"docid": "0ce0eda3b12e71163c44d649f35f424c",
"text": "In the light of the identified problem, the primary objective of this study was to investigate the perceived role of strategic leadership in strategy implementation in South African organisations. The conclusion is that strategic leadership positively contributes to effective strategy implementation in South African organisations.",
"title": ""
},
{
"docid": "c5380f25f7b3005e8cbfceba9bb4bfa0",
"text": "We propose an event-driven model for headline generation. Given an input document, the system identifies a key event chain by extracting a set of structural events that describe them. Then a novel multi-sentence compression algorithm is used to fuse the extracted events, generating a headline for the document. Our model can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods using event structures. Standard evaluation shows that our model achieves the best performance compared with previous state-of-the-art systems.",
"title": ""
},
{
"docid": "0d56726137067006ecc7da870e489b1d",
"text": "We explore the use of residual networks for argumentation mining, with an emphasis on link prediction. We propose a method of general applicability, that does not rely on domain knowledge such as document or argument structure. We evaluate our method on a challenging dataset consisting of usergenerated comments collected from an online platform. Results show that our model outperforms an equivalent deep network and offers results comparable with state-of-the-art methods that rely on domain knowledge.",
"title": ""
},
{
"docid": "1f1e56149f33f57c9c227922f6a33ff5",
"text": "Modern OS kernels including Windows, Linux, and Mac OS all have adopted kernel Address Space Layout Randomization (ASLR), which shifts the base address of kernel code and data into different locations in different runs. Consequently, when performing introspection or forensic analysis of kernel memory, we cannot use any pre-determined addresses to interpret the kernel events. Instead, we must derandomize the address space layout and use the new addresses. However, few efforts have been made to derandomize the kernel address space and yet there are many questions left such as which approach is more efficient and robust. Therefore, we present the first systematic study of how to derandomize a kernel when given a memory snapshot of a running kernel instance. Unlike the derandomization approaches used in traditional memory exploits in which only remote access is available, with introspection and forensics applications, we can use all the information available in kernel memory to generate signatures and derandomize the ASLR. In other words, there exists a large volume of solutions for this problem. As such, in this paper we examine a number of typical approaches to generate strong signatures from both kernel code and data based on the insight of how kernel code and data is updated, and compare them from efficiency (in terms of simplicity, speed etc.) and robustness (e.g., whether the approach is hard to be evaded or forged) perspective. In particular, we have designed four approaches including brute-force code scanning, patched code signature generation, unpatched code signature generation, and read-only pointer based approach, according to the intrinsic behavior of kernel code and data with respect to kernel ASLR. We have gained encouraging results for each of these approaches and the corresponding experimental results are reported in this paper.",
"title": ""
},
{
"docid": "75b654084c7205b209d41a33b9bc03b9",
"text": "The aims of the study were to evaluate the per- and post-operative complications and outcomes after cystocele repair with transobturator mesh. A retrospective continuous series study was conducted over a period of 3 years. Clinical evaluation was up to 1 year with additional telephonic interview performed after 34 months on average. When stress urinary incontinence (SUI) was associated with the cystocele, it was treated with the same mesh. One hundred twenty-three patients were treated for cystocele. Per-operative complications occurred in six patients. After 1 year, erosion rate was 6.5%, and only three cystoceles recurred. After treatment of SUI with the same mesh, 87.7% restored continence. Overall patient’s satisfaction rate was 93.5%. Treatment of cystocele using transobturator four arms mesh appears to reduce the risk of recurrence at 1 year, along with high rate of patient’s satisfaction. The transobturator path of the prosthesis arms seems devoid of serious per- and post-operative risks and allows restoring continence when SUI is present.",
"title": ""
},
{
"docid": "d247f00420b872fb0153a343d2b44dd3",
"text": "Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes. As a result, conventional network embedding techniques cannot work on such HINs. Recently, metapathbased approaches have been proposed to characterize relationships in HINs, but they are ineffective in capturing rich contexts and semantics between nodes for embedding learning, mainly because (1) metapath is a rather strict single path node-node relationship descriptor, which is unable to accommodate variance in relationships, and (2) only a small portion of paths can match the metapath, resulting in sparse context information for embedding learning. In this paper, we advocate a new metagraph concept to capture richer structural contexts and semantics between distant nodes. A metagraph contains multiple paths between nodes, each describing one type of relationships, so the augmentation of multiple metapaths provides an effective way to capture rich contexts and semantic relations between nodes. This greatly boosts the ability of metapath-based embedding techniques in handling very sparse HINs. We propose a new embedding learning algorithm, namely MetaGraph2Vec, which uses metagraph to guide the generation of random walks and to learn latent embeddings of multi-typed HIN nodes. Experimental results show that MetaGraph2Vec is able to outperform the state-of-theart baselines in various heterogeneous network mining tasks such as node classification, node clustering, and similarity search.",
"title": ""
},
{
"docid": "ec6fd0bc7f59bdf865b4383a247b984f",
"text": "This paper proposes a novel technique to forecast day-ahead electricity prices based on the wavelet transform and ARIMA models. The historical and usually ill-behaved price series is decomposed using the wavelet transform in a set of better-behaved constitutive series. Then, the future values of these constitutive series are forecast using properly fitted ARIMA models. In turn, the ARIMA forecasts allow, through the inverse wavelet transform, reconstructing the future behavior of the price series and therefore to forecast prices. Results from the electricity market of mainland Spain in year 2002 are reported.",
"title": ""
},
{
"docid": "108afda5990ac3219e1479ac68e0daca",
"text": "Localization and navigation technology is one of the key technologies for Automated Guided Vehicle (AGV). In this paper we develop an AGV navigation system utilizing commercial Bluetooth GPS. WGS-84 coordinates system is converted to space rectangular coordinate system, which is combined with orientation information, initial position and gesture of AGV can be determined. Experimental result shows that the localization accuracy can meet the demand.",
"title": ""
},
{
"docid": "0a57e8bf656311b682b3657bcd6141b9",
"text": "We present in this paper a set of design patterns we have mined in the area of Voice User Interfaces (VUI). In a previous paper [14], we introduced several patterns regarding fundamental issues of developing a voice application. In this paper we explore further aspects concerning the internal structure of an audio interface, the construction of the interaction style, the system response architecture, and implementation strategies to meet the demands of real world scenarios.",
"title": ""
},
{
"docid": "6f2dbfcce622454579c607bf7a8a2797",
"text": "A new 3D graphics and multimedia hardware architecture, cod named Talisman, is described which exploits both spatial and temporal coherence to reduce the cost of high quality animatio Individually animated objects are rendered into independent image layers which are composited together at video refresh ra to create the final display. During the compositing process, a fu affine transformation is applied to the layers to allow translatio rotation, scaling and skew to be used to simulate 3D motion of objects, thus providing a multiplier on 3D rendering performan and exploiting temporal image coherence. Image compression broadly exploited for textures and image layers to reduce imag capacity and bandwidth requirements. Performance rivaling hi end 3D graphics workstations can be achieved at a cost point two to three hundred dollars.",
"title": ""
},
{
"docid": "0745755e5347c370cdfbeca44dc6d288",
"text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.",
"title": ""
},
{
"docid": "6a62d36b37f70fda2a516c0f7fe50e32",
"text": "We improve the quality of statistical machine translation (SMT) by applying models that predict word forms from their stems using extensive morphological and syntactic information from both the source and target languages. Our inflection generation models are trained independently of the SMT system. We investigate different ways of combining the inflection prediction component with the SMT system by training the base MT system on fully inflected forms or on word stems. We applied our inflection generation models in translating English into two morphologically complex languages, Russian and Arabic, and show that our model improves the quality of SMT over both phrasal and syntax-based SMT systems according to BLEU and human judgements.",
"title": ""
},
{
"docid": "d3d58715498167d3fbf863b9f6423fcd",
"text": "In this paper, we focus on online detection and isolation of erroneous values reported by medical wireless sensors. We propose a lightweight approach for online anomaly detection in collected data, able to raise alarms only when patients enter in emergency situation and to discard faulty measurements. The proposed approach is based on Haar wavelet decomposition and Hampel filter for spatial analysis, and on boxplot for temporal analysis. Our objective is to reduce false alarms resulted from unreliable measurements. We apply our proposed approach on real physiological data set. Our experimental results prove the effectiveness of our approach to achieve good detection accuracy with low false alarm rate.",
"title": ""
},
{
"docid": "8dc0240302d467a14148fbbf98eaade3",
"text": "Observed variation between populations in fertility-timing distributions has been thought to contribute to infant mortality differentials. This hypothesis is based, in part, on the belief that the 20s through early 30s constitute \"prime\" childbearing ages that are low-risk relative to younger or older ages. However, when stratified by racial identification over the predominant first child-bearing ages, maternal age patterns of neonatal mortality vary between groups. Unlike non-Hispanic white infants, African-American infants with teen mothers experience a survival advantage relative to infants whose mothers are older. The black-white infant mortality differential is larger at older maternal ages than at younger ages. While African Americans and non-Hispanic whites differ on which maternal ages are associated with the lowest risk of neonatal mortality, within each population, first births are most frequent at its lowest-risk maternal ages. As a possible explanation for racial variation in maternal age patterns of births and birth outcomes, the \"weathering hypothesis\" is proposed: namely, that the health of African-American women may begin to deteriorate in early adulthood as a physical consequence of cumulative socioeconomic disadvantage.",
"title": ""
}
] |
scidocsrr
|
5aa9c799f837b4ed1613908bf4b58dc9
|
Active Transfer Learning
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "d699b6516696077a7caefd72a1c57bd1",
"text": "In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the iedb MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non convex methods dedicated to the same problem. ∗To whom correspondance should be addressed: 35, rue Saint Honoré, F-77300 Fontainebleau, France.",
"title": ""
}
] |
[
{
"docid": "af9c94a8d4dcf1122f70f5d0b90a247f",
"text": "New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today's large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.",
"title": ""
},
{
"docid": "2679d251d413adf208cb8b764ce55468",
"text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.",
"title": ""
},
{
"docid": "3bca3446ce76b1f1560e037e4041a1de",
"text": "PURPOSE\nThe aim was to describe the demographic and clinical data of 116 consecutive cases of ocular dermoids.\n\n\nMETHODS\nThis was a retrospective case series and a review of clinical records of all the patients diagnosed with ocular dermoids. Both demographic and clinical data were recorded. Statistical analysis was performed with SPSS v. 18. Descriptive statistics are reported.\n\n\nRESULTS\nThe study included 116 consecutive patients with diagnosis consistent with ocular dermoids: corneal 18% (21), dermolipomas 38% (44), and orbital 44% (51). Sixty-five percent (71) were female, and 46% (54) were detected at birth. Secondary manifestations: amblyopia was present in 14% (3), and strabismus was detected in 6.8% (8). The Goldenhar syndrome was the most frequent syndromic entity in 7.5% (12) of the patients. Surgical resection was required on 49% (25) of orbital dermoids, 24% (5) of corneal dermoids, and 13% (6) of dermolipomas.\n\n\nCONCLUSIONS\nOrbital dermoids were the most frequent variety, followed by conjunctival and corneal. In contrast to other reports, corneal dermoids were significantly more prevalent in women. Goldenhar syndrome was the most frequent syndromatic entity.",
"title": ""
},
{
"docid": "ef04be9f32e3fbc7fc5f7ccb14e734c5",
"text": "We report theoretical simulation of a novel silver subwavelength grating with reflectivity > 99.5%, substantially higher than uniform thin film, and a wide 99%-reflectivity bandwidth of 190nm, promising for VCSELs and surface-normal optoelectronic devices.",
"title": ""
},
{
"docid": "fa94ee5e70d030270f317093b852a4e1",
"text": "This paper studies how home wireless performance characteristics affect the performance of user traffic in real homes. Previous studies have focused either on wireless metrics exclusively, without connection to the performance of user traffic; or on the performance of the home network at higher layers. In contrast, we deploy a passive measurement tool on commodity access points to correlate wireless performance metrics with TCP performance of user traffic. We implement our measurement tool, deploy it on commodity routers in 66 homes for one month, and study the relationship between wireless metrics and TCP performance of user traffic. We find that, most of the time, TCP flows from devices in the home achieve only a small fraction of available access link throughput; as the throughput of user traffic approaches the access link throughput, the characteristics of the home wireless network more directly affect performance. We also find that the 5 GHz band offers users better performance better than the 2.4 GHz band, and although the performance of devices varies within the same home, many homes do not have multiple devices sending high traffic volumes, implying that certain types of wireless contention may be uncommon in practice.",
"title": ""
},
{
"docid": "62de6de8b92e4bba6ee947cd475363ee",
"text": "In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.",
"title": ""
},
{
"docid": "421ab26a36eb4f9d97dfb323e394fa38",
"text": "Dual-system approaches to psychology explain the fundamental properties of human judgment, decision making, and behavior across diverse domains. Yet, the appropriate characterization of each system is a source of debate. For instance, a large body of research on moral psychology makes use of the contrast between \"emotional\" and \"rational/cognitive\" processes, yet even the chief proponents of this division recognize its shortcomings. Largely independently, research in the computational neurosciences has identified a broad division between two algorithms for learning and choice derived from formal models of reinforcement learning. One assigns value to actions intrinsically based on past experience, while another derives representations of value from an internally represented causal model of the world. This division between action- and outcome-based value representation provides an ideal framework for a dual-system theory in the moral domain.",
"title": ""
},
{
"docid": "aa4bad972cb53de2e60fd998df08d774",
"text": "170 undergraduate students completed the Boredom Proneness Scale by Farmer and Sundberg and the Multiple Affect Adjective Checklist by Zuckerman and Lubin. Significant negative relationships were found between boredom proneness and negative affect scores (i.e., Depression, Hostility, Anxiety). Significant positive correlations also obtained between boredom proneness and positive affect (i.e., Positive Affect, Sensation Seeking). The correlations between boredom proneness \"subscales\" and positive and negative affect were congruent with those obtained using total boredom proneness scores. Implications for counseling are discussed.",
"title": ""
},
{
"docid": "a6426f7c52e0666744c4ec2760cc0046",
"text": "Growing concern about diet and health has led to development of healthier food products. In general consumer perception towards the intake of meat and meat products is unhealthy because it may increase the risk of diseases like cardiovascular diseases, obesity and cancer, because of its high fat content (especially saturated fat) and added synthetic antioxidants and antimicrobials. Addition of plant derivatives having antioxidant components including vitamins A, C and E, minerals, polyphenols, flavanoids and terpenoids in meat products may decrease the risk of several degenerative diseases. To change consumer attitudes towards meat consumption, the meat industry is undergoing major transformations by addition of nonmeat ingredients as animal fat replacers, natural antioxidants and antimicrobials, preferably derived from plant sources.",
"title": ""
},
{
"docid": "1cbd70bddd09be198f6695209786438d",
"text": "In this research work a neural network based technique to be applied on condition monitoring and diagnosis of rotating machines equipped with hydrostatic self levitating bearing system is presented. Based on fluid measured data, such pressures and temperature, vibration analysis based diagnosis is being carried out by determining the vibration characteristics of the rotating machines on the basis of signal processing tasks. Required signals are achieved by conversion of measured data (fluid temperature and pressures) into virtual data (vibration magnitudes) by means of neural network functional approximation techniques.",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "81243e721527e74f0997d6aeb250cc23",
"text": "This paper compares the attributes of 36 slot, 33 slot and 12 slot brushless interior permanent magnet motor designs, each with an identical 10 pole interior magnet rotor. The aim of the paper is to quantify the trade-offs between alternative distributed and concentrated winding configurations taking into account aspects such as thermal performance, field weakening behaviour, acoustic noise, and efficiency. It is found that the concentrated 12 slot design gives the highest theoretical performance however significant rotor losses are found during testing and a large amount of acoustic noise and vibration is generated. The 33 slot design is found to have marginally better performance than the 36 slot but it also generates some unbalanced magnetic pull on the rotor which may lead to mechanical issues at higher speeds.",
"title": ""
},
{
"docid": "6150e19bffad5629c6d5cb7439663b13",
"text": "We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classiication of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a signiicant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artiicial datasets. Our experimental results on real-world datasets show that the system is eeective in extracting compact and comprehensible rules with high predictive accuracy from neural networks.",
"title": ""
},
{
"docid": "a3e7a0cd6c0e79dee289c5b31c3dac76",
"text": "Silicone is one of the most widely used filler for facial cosmetic correction and soft tissue augmentation. Although initially it was considered to be a biologically inert material, many local and generalized adverse effects have been reported after silicone usage for cosmetic purposes. We present a previously healthy woman who developed progressive and persistent generalized livedo reticularis after cosmetic surgery for volume augmentation of buttocks. Histopathologic study demonstrated dermal presence of interstitial vacuoles and cystic spaces of different sizes between the collagen bundles, which corresponded to the silicone particles implanted years ago. These vacuoles were clustered around vascular spaces and surrounded by a few foamy macrophages. General examination and laboratory investigations failed to show any evidence of connective tissue disease or other systemic disorder. Therefore, we believe that the silicone implanted may have induced some kind of blood dermal perturbation resulting in the characteristic violet reticular discoloration of livedo reticularis.",
"title": ""
},
{
"docid": "404f1c68c097c74b120189af67bf00f5",
"text": "In 1991, a novel robot, MIT-MANUS, was introduced to study the potential that robots might assist in and quantify the neuro-rehabilitation of motor function. MIT-MANUS proved an excellent tool for shoulder and elbow rehabilitation in stroke patients, showing in clinical trials a reduction of impairment in movements confined to the exercised joints. This successful proof of principle as to additional targeted and intensive movement treatment prompted a test of robot training examining other limb segments. This paper focuses on a robot for wrist rehabilitation designed to provide three rotational degrees-of-freedom. The first clinical trial of the device will enroll 200 stroke survivors. Ultimately 160 stroke survivors will train with both the proximal shoulder and elbow MIT-MANUS robot, as well as with the novel distal wrist robot, in addition to 40 stroke survivor controls. So far 52 stroke patients have completed the robot training (ongoing protocol). Here, we report on the initial results on 36 of these volunteers. These results demonstrate that further improvement should be expected by adding additional training to other limb segments.",
"title": ""
},
{
"docid": "5dad217551cbbb7ba8476467c3469c6d",
"text": "This letter presents a semi-automatic approach to delineating road networks from very high resolution satellite images. The proposed method consists of three main steps. First, the geodesic method is used to extract the initial road segments that link the road seed points prescribed in advance by users. Next, a road probability map is produced based on these coarse road segments and a further direct thresholding operation separates the image into two classes of surfaces: the road and nonroad classes. Using the road class image, a kernel density estimation map is generated, upon which the geodesic method is used once again to link the foregoing road seed points. Experiments demonstrate that this proposed method can extract smooth correct road centerlines.",
"title": ""
},
{
"docid": "07d27c28ae8c87e9132939bc6ca540d3",
"text": "SRAM bitcell design margin continues to shrink due to random and systematic process variation in scaled technologies and conventional SRAM faces a challenge in realizing the power and density benefits of technology scaling. Smart and adaptive assist circuits can improve design margins while satisfying SRAM power and performance requirements in scaled technologies. This paper introduces an adaptive, dynamic SRAM word-line under-drive (ADWLUD) scheme that uses a bitcell-based sensor to dynamically optimize the strength of WLUD for each die. The ADWLUD sensor enables 130 mV reduction in SRAM Vccmin while increasing frequency yield by 9% over conventional SRAM without WLUD. The sensor area overhead is limited to 0.02% and power overhead is 2% for a 3.4 Mb SRAM array.",
"title": ""
},
{
"docid": "3d0b507f18dca7e2710eab5fdaa9a20b",
"text": "This paper is designed to illustrate and consider the relations between three types of metarepresentational ability used in verbal comprehension: the ability to metarepresent attributed thoughts, the ability to metarepresent attributed utterances, and the ability to metarepresent abstract, non-attributed representations (e.g. sentence types, utterance types, propositions). Aspects of these abilities have been separ at ly considered in the literatures on “theory of mind”, Gricean pragmatics and quotation. The aim of this paper is to show how the results of these separate strands of research might be integrated with an empirically plausible pragmatic theory.",
"title": ""
},
{
"docid": "ba026cfe4c08b22b67fbbbf31c7a57d3",
"text": "End-to-end relation extraction refers to identifying boundaries of entity mentions, entity types of these mentions and appropriate semantic relation for each pair of mentions. Traditionally, separate predictive models were trained for each of these tasks and were used in a “pipeline” fashion where output of one model is fed as input to another. But it was observed that addressing some of these tasks jointly results in better performance. We propose a single, joint neural network based model to carry out all the three tasks of boundary identification, entity type classification and relation type classification. This model is referred to as “All Word Pairs” model (AWP-NN) as it assigns an appropriate label to each word pair in a given sentence for performing end-to-end relation extraction. We also propose to refine output of the AWP-NN model by using inference in Markov Logic Networks (MLN) so that additional domain knowledge can be effectively incorporated. We demonstrate effectiveness of our approach by achieving better end-to-end relation extraction performance than all 4 previous joint modelling approaches, on the standard dataset of ACE 2004.",
"title": ""
},
{
"docid": "57c91bce931a23501f42772c103d15c1",
"text": "Faceted browsing is widely used in Web shops and product comparison sites. In these cases, a fixed ordered list of facets is often employed. This approach suffers from two main issues. First, one needs to invest a significant amount of time to devise an effective list. Second, with a fixed list of facets, it can happen that a facet becomes useless if all products that match the query are associated to that particular facet. In this work, we present a framework for dynamic facet ordering in e-commerce. Based on measures for specificity and dispersion of facet values, the fully automated algorithm ranks those properties and facets on top that lead to a quick drill-down for any possible target product. In contrast to existing solutions, the framework addresses e-commerce specific aspects, such as the possibility of multiple clicks, the grouping of facets by their corresponding properties, and the abundance of numeric facets. In a large-scale simulation and user study, our approach was, in general, favorably compared to a facet list created by domain experts, a greedy approach as baseline, and a state-of-the-art entropy-based solution.",
"title": ""
}
] |
scidocsrr
|
a4659ee1eebbca7ff3aa6d2d355de099
|
When Positive Sentiment Is Not So Positive: Textual Analytics and Bank Failures
|
[
{
"docid": "3f5097b33aab695678caca712b649a8f",
"text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.",
"title": ""
},
{
"docid": "f10724859d8982be426891e0d5c44629",
"text": "This paper empirically examines how capital affects a bank’s performance (survival and market share) and how this effect varies across banking crises, market crises, and normal times that occurred in the US over the past quarter century. We have two main results. First, capital helps small banks to increase their probability of survival and market share at all times (during banking crises, market crises, and normal times). Second, capital enhances the performance of medium and large banks primarily during banking crises. Additional tests explore channels through which capital generates these effects. Numerous robustness checks and additional tests are performed. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1991322dce13ee81885f12322c0e0f79",
"text": "The quality of the interpretation of the sentiment in the online buzz in the social media and the online news can determine the predictability of financial markets and cause huge gains or losses. That is why a number of researchers have turned their full attention to the different aspects of this problem lately. However, there is no well-rounded theoretical and technical framework for approaching the problem to the best of our knowledge. We believe the existing lack of such clarity on the topic is due to its interdisciplinary nature that involves at its core both behavioral-economic topics as well as artificial intelligence. We dive deeper into the interdisciplinary nature and contribute to the formation of a clear frame of discussion. We review the related works that are about market prediction based on onlinetext-mining and produce a picture of the generic components that they all have. We, furthermore, compare each system with the rest and identify their main differentiating factors. Our comparative analysis of the systems expands onto the theoretical and technical foundations behind each. This work should help the research community to structure this emerging field and identify the exact aspects which require further research and are of special significance. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "da74e402f4542b6cbfb27f04c7640eb4",
"text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.",
"title": ""
},
{
"docid": "96d6173f58e36039577c8e94329861b2",
"text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.",
"title": ""
},
{
"docid": "be7e30d4ebae196b9cdde7b5d6f79951",
"text": "This paper introduces a new quadrotor manipulation system that consists of a 2-link manipulator attached to the bottom of a quadrotor. This new system presents a solution for the drawbacks found in the current quadrotor manipulation system which uses a gripper fixed to a quadrotor. Unlike the current system, the proposed system enables the end-effector to achieve any arbitrary orientation and thus increases its degrees of freedom from 4 to 6. Also, it provides enough distance between the quadrotor and the object to be manipulated. This is useful in some applications such as demining applications. System kinematics and dynamics are derived which are highly nonlinear. Controller is designed based on feedback linearization to track desired trajectories. Controlling the movements in the horizontal directions is simplified by utilizing the derived nonholonmic constraints. Finally, the proposed system is simulated using MATLAB/SIMULINK program. The simulation results show the effectiveness of the proposed controller.",
"title": ""
},
{
"docid": "070a1de608a35cddb69b84d5f081e94d",
"text": "Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort to where it is needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or lack of features to characterize vulnerabilities. The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It has two steps by combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects have demonstrated that, LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable and outperform machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, and eight of them are new vulnerabilities.",
"title": ""
},
{
"docid": "5e5e2d038ae29b4c79c79abe3d20ae40",
"text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ef2aa8cc707ab30782f3e4e133db1337",
"text": "Recurrent neural networks (RNNs), especially long shortterm memory (LSTM) RNNs, are effective network for sequential task like speech recognition. Deeper LSTM models perform well on large vocabulary continuous speech recognition, because of their impressive learning ability. However, it is more difficult to train a deeper network. We introduce a training framework with layer-wise training and exponential moving average methods for deeper LSTM models. It is a competitive framework that LSTM models of more than 7 layers are successfully trained on Shenma voice search data in Mandarin and they outperform the deep LSTM models trained by conventional approach. Moreover, in order for online streaming speech recognition applications, the shallow model with low real time factor is distilled from the very deep model. The recognition accuracy have little loss in the distillation process. Therefore, the model trained with the proposed training framework reduces relative 14% character error rate, compared to original model which has the similar real-time capability. Furthermore, the novel transfer learning strategy with segmental Minimum Bayes-Risk is also introduced in the framework. The strategy makes it possible that training with only a small part of dataset could outperform full dataset training from the beginning.",
"title": ""
},
{
"docid": "612416cb82559f94d8d4b888bad17ba1",
"text": "Future plastic materials will be very different from those that are used today. The increasing importance of sustainability promotes the development of bio-based and biodegradable polymers, sometimes misleadingly referred to as 'bioplastics'. Because both terms imply \"green\" sources and \"clean\" removal, this paper aims at critically discussing the sometimes-conflicting terminology as well as renewable sources with a special focus on the degradation of these polymers in natural environments. With regard to the former we review innovations in feedstock development (e.g. microalgae and food wastes). In terms of the latter, we highlight the effects that polymer structure, additives, and environmental variables have on plastic biodegradability. We argue that the 'biodegradable' end-product does not necessarily degrade once emitted to the environment because chemical additives used to make them fit for purpose will increase the longevity. In the future, this trend may continue as the plastics industry also is expected to be a major user of nanocomposites. Overall, there is a need to assess the performance of polymer innovations in terms of their biodegradability especially under realistic waste management and environmental conditions, to avoid the unwanted release of plastic degradation products in receiving environments.",
"title": ""
},
{
"docid": "1ef2e54d021f9d149600f0bc7bebb0cd",
"text": "The field of open-domain conversation generation using deep neural networks has attracted increasing attention from researchers for several years. However, traditional neural language models tend to generate safe, generic reply with poor logic and no emotion. In this paper, an emotional conversation generation orientated syntactically constrained bidirectional-asynchronous framework called E-SCBA is proposed to generate meaningful (logical and emotional) reply. In E-SCBA, pre-generated emotion keyword and topic keyword are asynchronously introduced into the reply during the generation, and the process of decoding is much different from the most existing methods that generates reply from the first word to the end. A newly designed bidirectional-asynchronous decoder with the multi-stage strategy is proposed to support this idea, which ensures the fluency and grammaticality of reply by making full use of syntactic constraint. Through the experiments, the results show that our framework not only improves the diversity of replies, but gains a boost on both logic and emotion compared with baselines as well.",
"title": ""
},
{
"docid": "c101290e355e76df7581a4500c111c86",
"text": "The Internet of Things (IoT) is a network of physical things, objects, or devices, such as radio-frequency identification tags, sensors, actuators, mobile phones, and laptops. The IoT enables objects to be sensed and controlled remotely across existing network infrastructure, including the Internet, thereby creating opportunities for more direct integration of the physical world into the cyber world. The IoT becomes an instance of cyberphysical systems (CPSs) with the incorporation of sensors and actuators in IoT devices. Objects in the IoT have the potential to be grouped into geographical or logical clusters. Various IoT clusters generate huge amounts of data from diverse locations, which creates the need to process these data more efficiently. Efficient processing of these data can involve a combination of different computation models, such as in situ processing and offloading to surrogate devices and cloud-data centers.",
"title": ""
},
{
"docid": "45c13af41bc3d1b5ba5ea678f9b2eb6f",
"text": "A new type of mobile robots with the inch worm mechanism is presented in this paper for inspecting pipelines from the outside of pipe surfaces under hostile environments. This robot, Mark 111, is made after the successful investigation of the prototypes, Mark I and 11, which can pass over obstacles on pipelines, such as flanges and T-joints and others. Newly developed robot, Mark 111, can move vertically along the pipeline and move to the adjacent pipeline for the inspection. The sensors, infra ray proximity sensor and ultra sonic sensors and others, are installed to detect these obstacles and can move autonomously controlled by the microprocessor. The control method of this robot can be carried out by the dual control mode proposed in this paper.",
"title": ""
},
{
"docid": "9eed972925d4a3e805ea53a04208f43f",
"text": "This article examines the practice of electronics building in the context of other crafts. We compare the experience of making electronics with the experiences of carving, sewing, and painting. Our investigation is grounded in a survey of 40 practicing craftspeople who are working in each of these disciplines. We then use this survey as a foundation for a discussion of hybrid craft—integrations of electronics with carving, sewing, and painting. We present examples of hybrid craft and discuss the ways in which blended practices can enrich and diversify technology.",
"title": ""
},
{
"docid": "a377b31c0cb702c058f577ca9c3c5237",
"text": "Problem statement: Extensive research efforts in the area of Natural L anguage Processing (NLP) were focused on developing reading comprehens ion Question Answering systems (QA) for Latin based languages such as, English, French and German . Approach: However, little effort was directed towards the development of such systems for bidirec tional languages such as Arabic, Urdu and Farsi. In general, QA systems are more sophisticated and more complex than Search Engines (SE) because they seek a specific and somewhat exact answer to the query. Results: Existing Arabic QA system including the most recent described excluded one or both types of questions (How and Why) from their work because of the difficulty of handling these questions. In this study, we present a new approach and a new questio nanswering system (QArabPro) for reading comprehensi on texts in Arabic. The overall accuracy of our system is 84%. Conclusion/Recommendations: These results are promising compared to existing systems. Our system handles all types of questions including (How and why).",
"title": ""
},
{
"docid": "6a2584657154d6c9fd0976c30469349a",
"text": "A major challenge for managers in turbulent environments is to make sound decisions quickly. Dynamic capabilities have been proposed as a means for addressing turbulent environments by helping managers extend, modify, and reconfigure existing operational capabilities into new ones that better match the environment. However, because dynamic capabilities have been viewed as an elusive black box, it is difficult for managers to make sound decisions in turbulent environments if they cannot effectively measure dynamic capabilities. Therefore, we first seek to propose a measurable model of dynamic capabilities by conceptualizing, operationalizing, and measuring dynamic capabilities. Specifically, drawing upon the dynamic capabilities literature, we identify a set of capabilities—sensing the environment, learning, coordinating, and integrating— that help reconfigure existing operational capabilities into new ones that better match the environment. Second, we propose a structural model where dynamic capabilities influence performance by reconfiguring existing operational capabilities in the context of new product development (NPD). Data from 180 NPD units support both the measurable model of dynamic capabilities and also the structural model by which dynamic capabilities influence performance in NPD by reconfiguring operational capabilities, particularly in higher levels of environmental turbulence. The study’s implications for managerial decision making in turbulent environments by capturing the elusive black box of dynamic capabilities are discussed. Subject Areas: Decision Making in Turbulent Environments, Dynamic Capabilities, Environmental Turbulence, New Product Development, and Operational Capabilities.",
"title": ""
},
{
"docid": "29e1ecb7b1dfbf4ca2a229726dcab12e",
"text": "The recently developed depth sensors, e.g., the Kinect sensor, have provided new opportunities for human-computer interaction (HCI). Although great progress has been made by leveraging the Kinect sensor, e.g., in human body tracking, face recognition and human action recognition, robust hand gesture recognition remains an open problem. Compared to the entire human body, the hand is a smaller object with more complex articulations and more easily affected by segmentation errors. It is thus a very challenging problem to recognize hand gestures. This paper focuses on building a robust part-based hand gesture recognition system using Kinect sensor. To handle the noisy hand shapes obtained from the Kinect sensor, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it only matches the finger parts while not the whole hand, it can better distinguish the hand gestures of slight differences. The extensive experiments demonstrate that our hand gesture recognition system is accurate (a 93.2% mean accuracy on a challenging 10-gesture dataset), efficient (average 0.0750 s per frame), robust to hand articulations, distortions and orientation or scale changes, and can work in uncontrolled environments (cluttered backgrounds and lighting conditions). The superiority of our system is further demonstrated in two real-life HCI applications.",
"title": ""
},
{
"docid": "9e3a7ae57f7faf984bdf8559e7e49850",
"text": "In the late 1960s Brazil was experiencing a boom in its television and record industries, as part of the so-called “Economic Miracle” (1968 74) brought about by the military dictatorship’s opening up of the market to international capital. Censorship was introduced more or less simultaneously and responded in part to the military’s recognition of the potential power of the audio-visual media in a country in which over half of the population was illiterate or semi-literate. After the 1964 coup and until the infamous 5 Institutional Act (AI-5), introduced in 1968 to silence opposition to the regime, the left wing cultural production that had characterised the period under the government of the deposed populist president, João Goulart, had continued to flourish. Until 1968, the military had largely left the cultural scene alone to face up to the failure of its revolutionary political and cultural projects. Instead the generals focused on the brutal repression of student, trade union and grassroots activists who had collaborated with the cultural left, thus effectively depriving these artists of their public. Chico Buarque, one of the most censored performers of the period, maintains that at this moment he was saved from retreating into an introspective formalism in his songs and musical dramas by the emergence in 1965 of the televised music festivals, which became one of the most talked about events in the country (Buarque, 1979, 48). Sponsored by the television stations, which were themselves closely monitored and regulated by the government, the festivals still provided oppositional songwriters with an opportunity to re-",
"title": ""
},
{
"docid": "4a6b9f883c954b4a0ef5b450211567c2",
"text": "PURPOSE\nThe aim of this study was to evaluate the effect of preheating composite resins used as luting agents for indirect restorations on microtensile bond strength (µTBS) and adhesive interfaces.\n\n\nMATERIAL AND METHODS\nFifty sound extracted third molars were used. Ten experimental groups were formed with three different luting agents: one resin cement (RelyX ARC) and two composite resins (Venus and Z250 XT). The composite resins were tested both at room temperature and when preheated to 64°C. Restoration depth was tested using 2 or 4 mm-height indirect composite resin restorations, previously made on cylindrical molds. Adhesive and luting procedures were done under simulated pulpal pressure. After luting, the teeth were sectioned into beams with a cross-sectional area of 1 mm2 at the bonded interface, and tested in tension at 0.5 mm/min. The characteristics of the adhesive interfaces were observed under scanning electron microscopy (SEM). The µTBS data were analyzed using ANOVA and the Tukey test (α = 0.05).\n\n\nRESULTS\nWhen luting 2 mm restorations, the composite resin Z250 XT, preheated or at room temperature, achieved significantly higher µTBS than did RelyX ARC. At this depth, Venus did not differ from the resin cement, and with the 4 mm restorations, only preheated Venus presented significantly higher µTBS than RelyX ARC. Preheating the composite resin resulted in thinner luting interfaces, with a more intimate interaction between luting agent and adhesive layer.\n\n\nCONCLUSION\nPreheating composite resin for luting procedures may not improve µTBS, although it could be used to reduce material viscosity and improve restoration setting.",
"title": ""
},
{
"docid": "9888ef3aefca1049307ecd49ea5a3a49",
"text": "We live in a \"small world,\" where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship.",
"title": ""
},
{
"docid": "031571ac48fad3adec0e781986003630",
"text": "BACKGROUND\nSubjective reports of insomnia and hypersomnia are common in bipolar disorder (BD). It is unclear to what extent these relate to underlying circadian rhythm disturbance (CRD). In this study we aimed to objectively assess sleep and circadian rhythm in a cohort of patients with BD compared to matched controls.\n\n\nMETHOD\nForty-six patients with BD and 42 controls had comprehensive sleep/circadian rhythm assessment with respiratory sleep studies, prolonged accelerometry over 3 weeks, sleep questionnaires and diaries, melatonin levels, alongside mood, psychosocial functioning and quality of life (QoL) questionnaires.\n\n\nRESULTS\nTwenty-three (50%) patients with BD had abnormal sleep, of whom 12 (52%) had CRD and 29% had obstructive sleep apnoea. Patients with abnormal sleep had lower 24-h melatonin secretion compared to controls and patients with normal sleep. Abnormal sleep/CRD in BD was associated with impaired functioning and worse QoL.\n\n\nCONCLUSIONS\nBD is associated with high rates of abnormal sleep and CRD. The association between these disorders, mood and functioning, and the direction of causality, warrants further investigation.",
"title": ""
},
{
"docid": "2935e97887092bbe1a02ba6031e66968",
"text": "This paper studies the cross-sectional properties of return forecasts derived from Fama-MacBeth regressions. These forecasts mimic how an investor could, in real time, combine many firm characteristics to obtain a composite estimate of a stock’s expected return. Empirically, the forecasts vary substantially across stocks and have strong predictive power for actual returns. For example, using ten-year rolling estimates of FamaMacBeth slopes and a cross-sectional model with 15 firm characteristics (all based on low-frequency data), the expected-return estimates have a cross-sectional standard deviation of 0.87% monthly and a predictive slope for future monthly returns of 0.74, with a standard error of 0.07.",
"title": ""
}
] |
scidocsrr
|
7dd591b32159f4be0c666e32796642aa
|
GamePad: A Learning Environment for Theorem Proving
|
[
{
"docid": "cc7033023e1c5a902dfa10c8346565c4",
"text": "Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first order formulas with respect to combinations of background theories such as: arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT Solver freely available from Microsoft Research. It is used in various software verification and analysis applications.",
"title": ""
},
{
"docid": "cd8c1c24d4996217c8927be18c48488f",
"text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
}
] |
[
{
"docid": "5a4315e5887bdbb6562e76b54d03beeb",
"text": "A combination of conventional cross sectional process and device simulations combined with top down and 3D device simulations have been used to design and optimise the integration of a 100V Lateral DMOS (LDMOS) device for high side bridge applications. This combined simulation approach can streamline the device design process and gain important information about end effects which are lost from 2D cross sectional simulations. Design solutions to negate detrimental end effects are proposed and optimised by top down and 3D simulations and subsequently proven on tested silicon.",
"title": ""
},
{
"docid": "47f9724fd9dc25eda991854074ac0afa",
"text": "This paper reviews the state of the art in piezoelectric energy harvesting. It presents the basics of piezoelectricity and discusses materials choice. The work places emphasis on material operating modes and device configurations, from resonant to non-resonant devices and also to rotational solutions. The reviewed literature is compared based on power density and bandwidth. Lastly, the question of power conversion is addressed by reviewing various circuit solutions.",
"title": ""
},
{
"docid": "b6715e3ee8b2876b479522c03c1d674a",
"text": "Normalizing for atmospheric and land surface bidirectional reflectance distribution function (BRDF) effects is essential in satellite data processing. It is important both for a single scene when the combination of land covers, sun, and view angles create anisotropy and for multiple scenes in which the sun angle changes. As a consequence, it is important for inter-sensor calibration and comparison. Procedures based on physics-based models have been applied successfully with the Moderate Resolution Imaging Spectroradiometer (MODIS) data. For Landsat and other higher resolution data, similar options exist. However, the estimation of BRDF models using internal fitting is not available due to the smaller variation of view and solar angles and infrequent revisits. In this paper, we explore the potential for developing operational procedures to correct Landsat data using coupled physics-based atmospheric and BRDF models. The process was realized using BRDF shape functions derived from MODIS with the MODTRAN 4 radiative transfer model. The atmospheric and BRDF correction algorithm was tested for reflectance factor estimation using Landsat data for two sites with different land covers in Australia. The Landsat reflectance values had a good agreement with ground based spectroradiometer measurements. In addition, overlapping images from adjacent paths in Queensland, Australia, were also used to validate the BRDF correction. The results clearly show that the algorithm can remove most of the BRDF effect without empirical adjustment. The comparison between normalized Landsat and MODIS reflectance factor also shows a good relationship, indicating that cross calibration between the two sensors is achievable.",
"title": ""
},
{
"docid": "78b6d4935256010742bc67491935374d",
"text": "Technology has enabled us to imagine beyond our working capacities and think of solutions that can replace the monotonous work with automated machines and systems. This research paper is aimed at making the parking system agile, robust and more convenient for people. Albeit, several parking solutions are available, this system integrates all problems into one single idea that can be permanently embedded as a solution. The system will incorporate different modules like parking availability calculation, proximity estimation and payment service. The system will also guide the vehicle owners to navigate through the parking lot. Moreover, an analysis will be conducted to examine the benefits of the current project and how it can be improved.",
"title": ""
},
{
"docid": "71aae4cbccf6d3451d35528ceca8b8a9",
"text": "We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using simple linear SVM on the resultant bag of hierarchical space-time segments representation, we attain better than, or comparable to, state-of-the-art action recognition performance on two challenging benchmark datasets and at the same time produce good action localization results.",
"title": ""
},
{
"docid": "08dbd88adb399721e0f5ee91534c9888",
"text": "Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.",
"title": ""
},
{
"docid": "ab56aa5fc6fe6557c2be28056cfb660e",
"text": "Autophagy is an evolutionarily ancient mechanism that ensures the lysosomal degradation of old, supernumerary or ectopic cytoplasmic entities. Most eukaryotic cells, including neurons, rely on proficient autophagic responses for the maintenance of homeostasis in response to stress. Accordingly, autophagy mediates neuroprotective effects following some forms of acute brain damage, including methamphetamine intoxication, spinal cord injury and subarachnoid haemorrhage. In some other circumstances, however, the autophagic machinery precipitates a peculiar form of cell death (known as autosis) that contributes to the aetiology of other types of acute brain damage, such as neonatal asphyxia. Here, we dissect the context-specific impact of autophagy on non-infectious acute brain injury, emphasizing the possible therapeutic application of pharmacological activators and inhibitors of this catabolic process for neuroprotection.",
"title": ""
},
{
"docid": "40099678d2c97013eb986d3be93eefb4",
"text": "Mortality prediction of intensive care unit (ICU) patients facilitates hospital benchmarking and has the opportunity to provide caregivers with useful summaries of patient health at the bedside. The development of novel models for mortality prediction is a popular task in machine learning, with researchers typically seeking to maximize measures such as the area under the receiver operator characteristic curve (AUROC). The number of ’researcher degrees of freedom’ that contribute to the performance of a model, however, presents a challenge when seeking to compare reported performance of such models. In this study, we review publications that have reported performance of mortality prediction models based on the Medical Information Mart for Intensive Care (MIMIC) database and attempt to reproduce the cohorts used in their studies. We then compare the performance reported in the studies against gradient boosting and logistic regression models using a simple set of features extracted from MIMIC. We demonstrate the large heterogeneity in studies that purport to conduct the single task of ’mortality prediction’, highlighting the need for improvements in the way that prediction tasks are reported to enable fairer comparison between models. We reproduced datasets for 38 experiments corresponding to 28 published studies using MIMIC. In half of the experiments, the sample size we acquired was 25% greater or smaller than the sample size reported. The highest discrepancy was 11,767 patients. While accurate reproduction of each study cannot be guaranteed, we believe that these results highlight the need for more consistent reporting of model design and methodology to allow performance improvements to be compared. We discuss the challenges in reproducing the cohorts used in the studies, highlighting the importance of clearly reported methods (e.g. data cleansing, variable selection, cohort selection) and the need for open code and publicly available benchmarks.",
"title": ""
},
{
"docid": "e72872277a33dcf6d5c1f7e31f68a632",
"text": "Tilt rotor unmanned aerial vehicle (TRUAV) with ability of hovering and high-speed cruise has attached much attention, but its transition control is still a difficult point because of varying dynamics. This paper proposes a multi-model adaptive control (MMAC) method for a quad-TRUAV, and the stability in the transition procedure could be ensured by considering corresponding dynamics. For safe transition, tilt corridor is considered firstly, and actual flight status should locate within it. Then, the MMAC controller is constructed according to mode probabilities, which are calculated by solving a quadratic programming problem based on a set of input- output plant models. Compared with typical gain scheduling control, this method could ensure transition stability more effectively.",
"title": ""
},
{
"docid": "680be905a0f01e26e608ba7b4b79a94e",
"text": "A cost-effective position measurement system based on optical mouse sensors is presented in this work. The system is intended to be used in a planar positioning stage for microscopy applications and as such, has strict resolution, accuracy, repeatability, and sensitivity requirements. Three techniques which improve the measurement system's performance in the context of these requirements are proposed; namely, an optical magnification of the image projected onto the mouse sensor, a periodic homing procedure to reset the error buildup, and a compensation of the undesired dynamics caused by filters implemented in the mouse sensor chip.",
"title": ""
},
{
"docid": "99c25c7e8dfbdffb5949fc00730cbe15",
"text": "The vegetation outlook (VegOut) is a geospatial tool for predicting general vegetation condition patterns across large areas. VegOut predicts a standardized seasonal greenness (SSG) measure, which represents a general indicator of relative vegetation health. VegOut predicts SSG values at multiple time steps (two to six weeks into the future) based on the analysis of “historical patterns” (i.e., patterns at each 1 km grid cell and time of the year) of satellite, climate, and oceanic data over an 18-year period (1989 to 2006). The model underlying VegOut capitalizes on historical climate–vegetation interactions and ocean–climate teleconnections (such as El Niño and the Southern Oscillation, ENSO) expressed over the 18-year data record and also considers several environmental characteristics (e.g., land use/cover type and soils) that influence vegetation’s response to weather conditions to produce 1 km maps that depict future general vegetation conditions. VegOut provides regionallevel vegetation monitoring capabilities with local-scale information (e.g., county to sub-county level) that can complement more traditional remote sensing–based approaches that monitor “current” vegetation conditions. In this paper, the VegOut approach is discussed and a case study over the central United States for selected periods of the 2008 growing season is presented to demonstrate the potential of this new tool for assessing and predicting vegetation conditions.",
"title": ""
},
{
"docid": "30260d1a4a936c79e6911e1e91c3a84a",
"text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.",
"title": ""
},
{
"docid": "aec23c23dfb209513fe804a2558cd087",
"text": "In recent years, STT-RAMs have been proposed as a promising replacement for SRAMs in on-chip caches. Although STT-RAMs benefit from high-density, non-volatility, and low-power characteristics, high rates of read disturbances and write failures are the major reliability problems in STTRAM caches. These disturbance/failure rates are directly affected not only by workload behaviors, but also by process variations. Several studies characterized the reliability of STTRAM caches just for one cell, but vulnerability of STT-RAM caches cannot be directly derived from these models. This paper extrapolates the reliability characteristics of one STTRAM cell presented in previous studies to the vulnerability analysis of STT-RAM caches. To this end, we propose a highlevel framework to investigate the vulnerability of STT-RAM caches affected by the per-cell disturbance/failure rates as well as the workloads behaviors and process variations. This framework is an augmentation of gem5 simulator. The investigation reveals that: 1) the read disturbance rate in a cache varies by 6 orders of magnitude for different workloads, 2) the write failure rate varies by 4 orders of magnitude for different workloads, and 3) the process variations increase the read disturbance and write failure rates by up to 5.8x and 8.9x, respectively.",
"title": ""
},
{
"docid": "d5a882ecc0c78ee4c8456adb21914af4",
"text": "Radiologists routinely examine medical images such as XRay, CT, or MRI and write reports summarizing their descriptive findings and conclusive impressions. A computer-aided radiology report generation system can lighten the workload for radiologists considerably and assist them in decision making. Although the rapid development of deep learning technology makes the generation of a single conclusive sentence possible, results produced by existing methods are not sufficiently reliable due to the complexity of medical images. Furthermore, generating detailed paragraph descriptions for medical images remains a challenging problem. To tackle this problem, we propose a novel generative model which generates a complete radiology report automatically. The proposed model incorporates the Convolutional Neural Networks (CNNs) with the Long Short-Term Memory (LSTM) in a recurrent way. It is capable of not only generating high-level conclusive impressions, but also generating detailed descriptive findings sentence by sentence to support the conclusion. Furthermore, our multimodal model combines the encoding of the image and one generated sentence to construct an attention input to guide the generation of the next sentence, and henceforth maintains coherence among generated sentences. Experimental results on the publicly available Indiana U. Chest X-rays from the Open-i image collection show that our proposed recurrent attention model achieves significant improvements over baseline models according to multiple evaluation metrics.",
"title": ""
},
{
"docid": "bd7f3decfe769db61f0577a60e39a26f",
"text": "Automated food and drink recognition methods connect to cloud-based lookup databases (e.g., food item barcodes, previously identified food images, or previously classified NIR (Near Infrared) spectra of food and drink items databases) to match and identify a scanned food or drink item, and report the results back to the user. However, these methods remain of limited value if we cannot further reason with the identified food and drink items, ingredients and quantities/portion sizes in a proposed meal in various contexts; i.e., understand from a semantic perspective their types, properties, and interrelationships in the context of a given user’s health condition and preferences. In this paper, we review a number of “food ontologies”, such as the Food Products Ontology/FOODpedia (by Kolchin and Zamula), Open Food Facts (by Gigandet et al.), FoodWiki (Ontology-driven Mobile Safe Food Consumption System by Celik), FOODS-Diabetes Edition (A Food-Oriented Ontology-Driven System by Snae Namahoot and Bruckner), and AGROVOC multilingual agricultural thesaurus (by the UN Food and Agriculture Organization—FAO). These food ontologies, with appropriate modifications (or as a basis, to be added to and further OPEN ACCESS Future Internet 2015, 7 373 expanded) and together with other relevant non-food ontologies (e.g., about diet-sensitive disease conditions), can supplement the aforementioned lookup databases to enable progression from the mere automated identification of food and drinks in our meals to a more useful application whereby we can automatically reason with the identified food and drink items and their details (quantities and ingredients/bromatological composition) in order to better assist users in making the correct, healthy food and drink choices for their particular health condition, age, body weight/BMI (Body Mass Index), lifestyle and preferences, etc.",
"title": ""
},
{
"docid": "544333c99f2b28e37702306bfe6521d4",
"text": "Faced with unsustainable costs and enormous amounts of under-utilized data, health care needs more efficient practices, research, and tools to harness the full benefits of personal health and healthcare-related data. Imagine visiting your physician’s office with a list of concerns and questions. What if you could walk out the office with a personalized assessment of your health? What if you could have personalized disease management and wellness plan? These are the goals and vision of the work discussed in this paper. The timing is right for such a research direction—given the changes in health care, reimbursement, reform, meaningful use of electronic health care data, and patient-centered outcome mandate. We present the foundations of work that takes a Big Data driven approach towards personalized healthcare, and demonstrate its applicability to patient-centered outcomes, meaningful use, and reducing re-admission rates.",
"title": ""
},
{
"docid": "86617458af24278fa2b69b544dc0f09e",
"text": "Recent research on learning in work situations has focussed on concepts such as ‘productive learning’ and ‘pedagogy of vocational learning’. In investigating what makes learning productive and what pedagogies enhance this, there is a tendency to take the notion of learning as unproblematic. This paper argues that much writing on workplace learning is strongly shaped by peoples’ understandings of learning in formal educational situations. Such assumptions distort attempts to understand learning at work. The main focus of this paper is to problematise the concept of ‘learning’ and to identify the implications of this for attempts to understand learning at work and the conditions that enhance it. An alternative conception of learning that promises to do more justice to the richness of learning at work is presented and discussed. For several years now, the adult and vocational learning research group at University of Technology, Sydney, (now known as OVAL Research1), has been pursuing a systematic research agenda centred on issues about learning at work (e.g. Boud & Garrick 1999, Symes & McIntyre 2000, Beckett & Hager 2002). The OVAL research group’s two most recent seminar series have been focussed on ‘productive learning’ and ‘pedagogy of vocational learning’. Both of these topics reflect a concern with conditions that enhance rich learning in work situations. In attempting, however, to characterise what makes learning productive and what pedagogies enhance this, there may be a tendency to take the notion of learning as unproblematic. I have elsewhere argued that common understandings of learning uncritically incorporate assumptions that derive from previous formal learning experiences (Hager forthcoming). Likewise Elkjaer (2003) has recently pointed out how much writing on workplace learning is strongly shaped by the authors’ understandings of learning in formal educational situations. The main focus of this paper is to problematise the concept of ‘learning’ and to identify the implications of this for attempts to understand learning at work and the conditions that enhance it. A key claim is that government policies that impact significantly on learning at work commonly treat learning as a product, i.e. as the acquisition of discrete items of knowledge or skill. The argument is that these policies thereby obstruct attempts to develop satisfactory understandings of learning at work. 1 The Australian Centre for Organisational, Vocational and Adult Learning Research. (For details see www.oval.uts.edu.au) Problematising the Concept of Learning Although learning is still widely treated as an unproblematic concept in educational writings, there is growing evidence that its meaning increasingly is being contested. For instance Brown & Palincsar (1989, p. 394) observed: “Learning is a term with more meanings that there are theorists”. Schoenfeld (1999, p. 6) noted “....that the very definition of learning is contested, and that assumptions that people make regarding its nature and where it takes place also vary widely.” According to Winch “.....the possibility of giving a scientific or even a systematic account of human learning is ..... mistaken” (1998, p. 2). His argument is that there are many and diverse cases of learning, each subject to “constraints in a variety of contexts and cultures” which precludes them from being treated in a general way (1998, p. 85). He concludes that “... grand theories of learning .... are underpinned ... invariably ... 
by faulty epistemological premises” (Winch, 1998, p. 183). Not only is the concept of learning disputed amongst theorists, it seems that even those with the greatest claims to practical knowledge of learning may be deficient in their understanding. Those bastions of learning, higher education institutions can trace their origins back into the mists of time. If anyone knows from experience what learning is it should be them. Yet the recent cyber learning debacle suggests otherwise. Many of the world’s most illustrious universities have invested many millions of dollars setting up suites of online courses in the expectation of making large profits from offcampus students. According to Brabazon (2002), these initiatives have manifestly failed since prospective students were not prepared to pay the fees. Many of these online courses are now available free as a backup resource for on-campus students. Brabazon’s analysis is that these university ‘experts’ on learning have confused technology with teaching and tools with learning. The staggering sums of money mis-invested in online education certainly shows that universities may not be the experts in learning that they think they are. We can take Brabazon’s analysis a step further. The reason why tools were confused with learning, I argue, is that learning is not a well understood concept at the start of the 21st century. Perhaps it is in a similar position to the concept of motion at the end of the middle ages. Of course, motion is one of the central concepts in physics, just as learning is a central concept in education, and the social sciences generally. For a long time, understanding of motion was limited by adherence to the Aristotelian attempt to provide a single account of all motion. Aristotle proposed a second-order distinction between natural and violent motions. It was the ‘nature’ of all terrestrial bodies to have a natural motion towards the centre of the universe (the centre of the earth); but bodies were also subject to violent motions in any direction imparted by disruptive, external, ‘non-natural’ causes. So the idea was to privilege one kind of motion as basic and to account for others in terms of non-natural disruptions to this natural motion. The Aristotelian account persisted for so long because it was in accord with ‘common sense’ ideas on motion. Everyone was familiar with motion and thought that they understood it. Likewise, everyone has experienced formal schooling and this shapes how they understand learning. Thus, the type of learning that is familiar to everyone gains privileged status. The worth of other kinds of learning is judged by how well they approximate the favoured kind (Beckett & Hager 2002, section 6.1). The dominance of this concept of learning is also evident in educational thought, where there has been a major focus on learning in formal education settings. This dominant view of learning also fits well with ‘folk’ conceptions of the mind (Bereiter 2002). Real progress in understanding motion came when physicists departed from ‘common sense’ ideas and recognised that there are many different types of motion – falling, projectile, pendulum, wave, etc. each requiring their own account. Likewise, it seems there are many types of learning and things that can be learnt – propositions, skills, behaviours, attitudes, etc. Efforts to understand these may well require a range of theories each with somewhat different assumptions. 
The Monolithic Influence of Viewing Learning as a Product There is currently a dominant view of learning that is akin to the Aristotelian view of motion in its pervasive influence. It provides an account of supposedly the best kind of learning, and all cases of learning are judged by how well they fit this view. This dominant view of learning – the ‘common sense’ account – views the mind as a ‘container’ and ‘knowledge as a type of substance’ (Lakoff & Johnson 1980). Under the influence of the mind-as-container metaphor, knowledge is treated as consisting of objects contained in individual minds, something like the contents of mental filing cabinets. (Bereiter 2002, p. 179) Thus there is a focus on ‘adding more substance’ to the mind. This is the ‘folk theory’ of learning (e.g. Bereiter 2002). It emphasises the products of learning. At this stage it might be objected that the educationally sophisticated have long ago moved beyond viewing learning as a product. Certainly, as shown later in this paper, the educational arguments for an alternative view have been persuasive for quite some time now. Nevertheless, much educational policy and practice, including policies and practices that directly impact on the emerging interest in learning at work, are clearly rooted in the learning as product view. For instance, typical policy documents relating to CompetencyBased Training view work performance as a series of decontextualised atomic elements, which novice workers are thought of as needing to pick up one by one. Once a discrete element is acquired, transfer or application to appropriate future circumstances by the learner is assumed to be unproblematic. This is a pure learning as product approach. Similarly, policy documents on generic skills (core or basic skills) typically reflect similar assumptions. Putative generic skills, such as communication and problem solving, are presented as discrete, decontextualised elements that, once acquired, can simply be transferred to diverse situations. Certainly, in literature emanating from employer groups, this assumption is endemic. These, then, are two policy areas that are closely linked to learning at work that are dominated by learning as product assumptions. Of course, Lyotard (1984) and other postmodern writers (e.g. Usher & Edwards 1994) have argued that the recent neo-liberal marketisation of education results in a commodification of knowledge, in which knowledge is equated with information. Such information can, for instance, be readily stored and transmitted via microelectronic technology. Students become consumers of educational commodities. All of this is grist to the learning as product mill. However, it needs to be emphasised that learning as product was the dominant mindset long before the rise of neo-liberal marketisation of education. This is reflected in standard international educational nomenclature: acquisition of content, transfer of learning, delivery of courses, course providers, course offerings, course load, ",
"title": ""
},
{
"docid": "162f080444935117c5125ae8b7c3d51e",
"text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1",
"title": ""
},
{
"docid": "ab9d4f991cf6fa1c6ecf4f2a7573cff1",
"text": "Over the last decade, much research has been conducted in the field of human resource management (HRM) and its associations with firm performance. Prior studies have found substantial positive evidence for statistical associations between HRM practices and improved firm performance. The purpose of this study is to investigate the relationships between HRM practices and firm performance with business strategy and environmental uncertainty as moderators. This study examines the relationships among HRM practices, environmental uncertainty, business strategy and firm performance. It was hypothesized that HRM practices could positively influenced profitability and growth and negatively influenced employee turnover. Data were collected using mail questionnaire sent to human resource managers in manufacturing firms in Malaysia. A total of 162 useable responses were obtained and used for the purpose of analysis. Results of hierarchical regression used to test the relationships among the variables indicated that (1) human resource planning has a relationship with profitability and growth; (2) performance-based pay has a relationship with profitability and growth; (3) skills development has a relationship with involuntary employee turnover; (4) environmental uncertainty as a moderator influence the relationship between human resource planning, performance-based pay and profitability; and (5) business strategy as a moderator influences the relationship between performance-based pay and growth. The findings can form the basis for useful recommendations for Malaysian managers in encouraging the practice of human resource management and for employees who are",
"title": ""
}
] |
scidocsrr
|
910063110cc07ecad68ee0586ad2a2c4
|
High-inductive short-circuit Type IV in multi-level converter protection schemes
|
[
{
"docid": "5cc929181c4a8ab7538b7bfc68015cf9",
"text": "The IGBT can run into different short-circuit types (SC I, SC II, SC III). Especially in SC II and III, an interaction between the gate drive unit and the IGBT takes place. A self-turn-off mechanism after short-circuit turn on can occur. Parasitic elements in the connection between the IGBT and the gate unit as well as asymmetrical wiring of devices connected in parallel are of effect to the short-circuit capability. In high-voltage IGBTs, filament formation can occur at short-circuit condition. Destructive measurements with its failure patterns and short-circuit protection methods are shown.",
"title": ""
}
] |
[
{
"docid": "5369b1f53fe492e07eaafe8979fc6a31",
"text": "MOTIVATION\nDNA microarray experiments generating thousands of gene expression measurements, are being used to gather information from tissue and cell samples regarding gene expression differences that will be useful in diagnosing disease. We have developed a new method to analyse this kind of data using support vector machines (SVMs). This analysis consists of both classification of the tissue samples, and an exploration of the data for mis-labeled or questionable tissue results.\n\n\nRESULTS\nWe demonstrate the method in detail on samples consisting of ovarian cancer tissues, normal ovarian tissues, and other normal tissues. The dataset consists of expression experiment results for 97,802 cDNAs for each tissue. As a result of computational analysis, a tissue sample is discovered and confirmed to be wrongly labeled. Upon correction of this mistake and the removal of an outlier, perfect classification of tissues is achieved, but not with high confidence. We identify and analyse a subset of genes from the ovarian dataset whose expression is highly differentiated between the types of tissues. To show robustness of the SVM method, two previously published datasets from other types of tissues or cells are analysed. The results are comparable to those previously obtained. We show that other machine learning methods also perform comparably to the SVM on many of those datasets.\n\n\nAVAILABILITY\nThe SVM software is available at http://www.cs. columbia.edu/ approximately bgrundy/svm.",
"title": ""
},
{
"docid": "46579940eac63ef355f8e79ef4358306",
"text": "In this paper socially intelligent agents (SIA) are understood as agents which do not only from an observer point of view behave socially but which are able to recognize and identify other agents and establish and maintain relationships to other agents. The process of building socially intelligent agents is innuenced by what the human as the designer considers`social', and conversely agent tools which are behaving socially can innuence human conceptions of sociality. A Cognitive Technology (CT) approach towards designing SIA aaords as an opportunity to study the process of 1) how new forms of interactions and func-tionalities and use of technology can emerge at the human-tool interface, 2) how social agents can constrain their cognitive and social potential, and 3) how social agent technology and human (social) cognition can co-evolve and co-adapt and result in new forms of sociality. Agent-human interaction requires a cognitive t between SIA technology and the human-in-the-loop as designer of, user of, and participant in social interactions. Aspects of human social psychology, e.g. story-telling, empathy, embodiment, historical and ecological grounding can contribute to a believable and cognitively well-balanced design of SIA technology, in order to further the relationship between humans and agent tools. It is hoped that approaches to believability based on these concepts can avoid thèshallowness' that merely take advantage of the anthromorphizing tendency in humans. This approach is put into the general framework of Embodied Artiicial Life (EAL) research. The paper concludes with a terminology and list of guidelines useful for SIA design.",
"title": ""
},
{
"docid": "fa1440ce586681326b18807e41e5465a",
"text": "Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target’s ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
},
{
"docid": "ac3f7a9557988101fb9e2eea0c1aa652",
"text": "Against the background of increasing awareness and appreciation of issues such as global warming and the impact of mankind's activities such as agriculture on the global environment, this paper updates previous assessments of some key environmental impacts that crop biotechnology has had on global agriculture. It focuses on the environmental impacts associated with changes in pesticide use and greenhouse gas emissions arising from the use of GM crops. The adoption of the technology has reduced pesticide spraying by 503 million kg (-8.8%) and, as a result, decreased the environmental impact associated with herbicide and insecticide use on these crops (as measured by the indicator the Environmental Impact Quotient [EIQ]) by 18.7%. The technology has also facilitated a significant reduction in the release of greenhouse gas emissions from this cropping area, which, in 2012, was equivalent to removing 11.88 million cars from the roads.",
"title": ""
},
{
"docid": "cba5c85ee9a9c4f97f99c1fcb35d0623",
"text": "Virtualized Cloud platforms have become increasingly common and the number of online services hosted on these platforms is also increasing rapidly. A key problem faced by providers in managing these services is detecting the performance anomalies and adjusting resources accordingly. As online services generate a very large amount of monitored data in the form of time series, it becomes very difficult to process this complex data by traditional approaches. In this work, we present a novel distributed parallel approach for performance anomaly detection. We build upon Holt-Winters forecasting for automatic aberrant behavior detection in time series. First, we extend the technique to work with MapReduce paradigm. Next, we correlate the anomalous metrics with the target Service Level Objective (SLO) in order to locate the suspicious metrics. We implemented and evaluated our approach on a production Cloud encompassing IaaS and PaaS service models. Experimental results confirm that our approach is efficient and effective in capturing the metrics causing performance anomalies in large time series datasets.",
"title": ""
},
{
"docid": "c702c4dbde96a024fac6fe4cbb052ce9",
"text": "Vehicular communications, referring to information exchange among vehicles, infrastructures, etc., have attracted a lot of attention recently due to great potential to support intelligent transportation, various safety applications, and on-road infotainment. In this paper, we provide a comprehensive overview of a recent research on enabling efficient and reliable vehicular communications from the network layer perspective. First, we introduce general applications and unique characteristics of vehicular communication networks and the corresponding classifications. Based on different driving patterns, we categorize vehicular networks into manual driving vehicular networks and automated driving vehicular networks, and then discuss the available communication techniques, network structures, routing protocols, and handoff strategies applied in these vehicular networks. Finally, we identify the challenges confronted by the current vehicular networks and present the corresponding research opportunities.",
"title": ""
},
{
"docid": "452c9eb3b5d411b1f32d6cf6a230b3e2",
"text": "The core vector machine (CVM) is a recent approach for scaling up kernel methods based on the notion of minimum enclosing ball (MEB). Though conceptually simple, an efficient implementation still requires a sophisticated numerical solver. In this paper, we introduce the enclosing ball (EB) problem where the ball's radius is fixed and thus does not have to be minimized. We develop efficient (1 + e)-approximation algorithms that are simple to implement and do not require any numerical solver. For the Gaussian kernel in particular, a suitable choice of this (fixed) radius is easy to determine, and the center obtained from the (1 + e)-approximation of this EB problem is close to the center of the corresponding MEB. Experimental results show that the proposed algorithm has accuracies comparable to the other large-scale SVM implementations, but can handle very large data sets and is even faster than the CVM in general.",
"title": ""
},
{
"docid": "becadf8b9d86457d9691e580b17366b5",
"text": "Failure of granular media under natural and laboratory loading conditions involves a variety of micromechanical processes producing several geometrically, kinematically, and texturally distinct types of structures. This paper provides a geological framework for failure processes as well as a mathematical model to analyze these processes. Of particular interest is the formation of tabular deformation bands in granular rocks, which could exhibit distinct localized deformation features including simple shearing, pure compaction/dilation, and various possible combinations thereof. The analysis is carried out using classical bifurcation theory combined with non-linear continuum mechanics and theoretical/computational plasticity. For granular media, yielding and plastic flow are known to be influenced by all three stress invariants, and thus we formulate a family of three-invariant plasticity models with a compression cap to capture the entire spectrum of yielding of geomaterials. We then utilize a return mapping algorithm in principal stress directions to integrate the stresses over discrete load increments, allowing the solution to find the critical bifurcation point for a given loading path. The formulation covers both the infinitesimal and finite deformation regimes, and comparisons are made of the localization criteria in the two regimes. In the accompanying paper, we demonstrate with numerical examples the role that the constitutive model and finite deformation effects play on the prediction of the onset of deformation bands in geomaterials. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "911d69eddd085d7642cdefbc658c821f",
"text": "The paper proposes an 8-bit AMOLED driver IC with a polynomial interpolation DAC. This architecture maintains high-accuracy AMOLED panels with 8-bit compensated gamma correction and supporting low-complex configuration which results in additional occupied die area. The proposed driver consists of a 6-bit gamma correction resistor-string DAC and a 2-bit polynomial interpolation current-modulation sub-DAC. The two-stage DAC leads to a compact die size compared with conventional 8-bit resister-string DAC, and the polynomial interpolation method provides high accurate grey level voltages than linear one. The AMOLED driver was realized in 0.35-μm CMOS process with DNL and INL of 0.43 LSB and 0.43 LSB.",
"title": ""
},
{
"docid": "43398874a34c7346f41ca7a18261e878",
"text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "38b8ce180e17f9a20189feeeb3d4410f",
"text": "In this paper, we present a Stochastic Scene Grammar (SSG) for parsing 2D indoor images into 3D scene layouts. Our grammar model integrates object functionality, 3D object geometry, and their 2D image appearance in a Function-Geometry-Appearance (FGA) hierarchy. In contrast to the prevailing approach in the literature which recognizes scenes and detects objects through appearance-based classification using machine learning techniques, our method takes a different perspective to scene understanding and recognizes objects and scenes by reasoning their functionality. Functionality is an essential property which often defines the categories of objects and scenes, and decides the design of geometry and scene layout. For example, a sofa is for people to sit comfortably, and a kitchen is a space for people to prepare food with various objects. Our SSG formulates object functionality and contextual relations between objects and imagined human poses in a joint probability distribution in the FGA hierarchy. The latter includes both functional concepts (the scene category, functional groups, functional objects, functional parts) and geometric entities (3D/2D/1D shape primitives). The decomposition of the grammar is terminated on the bottom-up detected lines and regions. We use a Markov chain Monte Carlo (MCMC) algorithm to optimize the Bayesian a posteriori probability and the output parse tree includes a 3D description of the 2D image in the FGA hierarchy. Experimental results on two Yibiao Zhao University of California, Los Angeles (UCLA), USA E-mail: ybzhao@ucla.edu www.yibiaozhao.com Song-Chun Zhu University of California, Los Angeles (UCLA), USA E-mail: sczhu@stat.ucla.edu http://www.stat.ucla.edu/~sczhu challenging indoor datasets demonstrate that the proposed approach not only significantly widens the scope of indoor scene parsing from traditional scene segmentation, labeling, and 3D reconstruction to functional object recognition, but also yields improved overall performance.",
"title": ""
},
{
"docid": "9cd82478c45179f354ab591bff44d59b",
"text": "License Plate Recognition (LPR) is a well known image processing technology. LPR system consists of four steps: capture the image from digital camera, pre-processing, character segmentation and character recognition. License plates are available in various styles and colors in various countries. Every country has their own license plate format. So each country develops the LPR system appropriate for the vehicle license plate format. Difficulties that the LPR systems face are the environmental and non-uniform outdoor illumination conditions. Therefore, most of the systems work under restricted environmental conditions like fixed illumination, limited vehicle speed, designated routes, and stationary backgrounds. Each LPR system use different combination of algorithms. From the papers being surveyed, it is realized that a good success rate of 93. 7% is obtained by the combination of fuzzy logic for license plate detection and Self Organizing (SO) neural network for character recognition. Comparisons of different LPR systems are discussed in this paper.",
"title": ""
},
{
"docid": "4d0f926c0b097f7b253db787e0c76b5c",
"text": "The processing and interpretation of pain signals is a complex process that entails excitation of peripheral nerves, local interactions within the spinal dorsal horn, and the activation of ascending and descending circuits that comprise a loop from the spinal cord to supraspinal structures and finally exciting nociceptive inputs at the spinal level. Although the \"circuits\" described here appear to be part of normal pain processing, the system demonstrates a remarkable ability to undergo neuroplastic transformations when nociceptive inputs are extended over time, and such adaptations function as a pronociceptive positive feedback loop. Manipulations directed to disrupt any of the nodes of this pain facilitatory loop may effectively disrupt the maintenance of the sensitized pain state and diminish or abolish neuropathic pain. Understanding the ascending and descending pain facilitatory circuits may provide for the design of rational therapies that do not interfere with normal sensory processing.",
"title": ""
},
{
"docid": "e3a766bad255bc3f4ad095cece45c637",
"text": "We introduce a new task called Multimodal Named Entity Recognition (MNER) for noisy user-generated data such as tweets or Snapchat captions, which comprise short text with accompanying images. These social media posts often come in inconsistent or incomplete syntax and lexical notations with very limited surrounding textual contexts, bringing significant challenges for NER. To this end, we create a new dataset for MNER called SnapCaptions (Snapchat image-caption pairs submitted to public and crowd-sourced stories with fully annotated named entities). We then build upon the state-of-the-art Bi-LSTM word/character based NER models with 1) a deep image network which incorporates relevant visual context to augment textual information, and 2) a generic modality-attention module which learns to attenuate irrelevant modalities while amplifying the most informative ones to extract contexts from, adaptive to each sample and token. The proposed MNER model with modality attention significantly outperforms the state-of-the-art text-only NER models by successfully leveraging provided visual contexts, opening up potential applications of MNER on myriads of social media platforms.",
"title": ""
},
{
"docid": "e322a4f6d36ccc561b6b793ef85db9c2",
"text": "Abdominal bracing is often adopted in fitness and sports conditioning programs. However, there is little information on how muscular activities during the task differ among the muscle groups located in the trunk and from those during other trunk exercises. The present study aimed to quantify muscular activity levels during abdominal bracing with respect to muscle- and exercise-related differences. Ten healthy young adult men performed five static (abdominal bracing, abdominal hollowing, prone, side, and supine plank) and five dynamic (V- sits, curl-ups, sit-ups, and back extensions on the floor and on a bench) exercises. Surface electromyogram (EMG) activities of the rectus abdominis (RA), external oblique (EO), internal oblique (IO), and erector spinae (ES) muscles were recorded in each of the exercises. The EMG data were normalized to those obtained during maximal voluntary contraction of each muscle (% EMGmax). The % EMGmax value during abdominal bracing was significantly higher in IO (60%) than in the other muscles (RA: 18%, EO: 27%, ES: 19%). The % EMGmax values for RA, EO, and ES were significantly lower in the abdominal bracing than in some of the other exercises such as V-sits and sit-ups for RA and EO and back extensions for ES muscle. However, the % EMGmax value for IO during the abdominal bracing was significantly higher than those in most of the other exercises including dynamic ones such as curl-ups and sit-ups. These results suggest that abdominal bracing is one of the most effective techniques for inducing a higher activation in deep abdominal muscles, such as IO muscle, even compared to dynamic exercises involving trunk flexion/extension movements. Key PointsTrunk muscle activities during abdominal bracing was examined with regard to muscle- and exercise-related differences.Abdominal bracing preferentially activates internal oblique muscles even compared to dynamic exercises involving trunk flexion/extension movements.Abdominal bracing should be included in exercise programs when the goal is to improve spine stability.",
"title": ""
},
{
"docid": "107cad2d86935768e9401495d2241b20",
"text": "A method is presented for using an extended Kalman filter with state noise compensation to estimate the trajectory, orientation, and slip variables for a small-scale robotic tracked vehicle. The principal goal of the method is to enable terrain property estimation. The methodology requires kinematic and dynamic models for skid-steering, as well as tractive force models parameterized by key soil parameters. Simulation studies initially used to verify the model basis are described, and results presented from application of the estimation method to both simulated and experimental study of a 60-kg robotic tracked vehicle. Preliminary results show the method can effectively estimate vehicle trajectory relying only on the model-based estimation and onboard sensor information. Estimates of slip on the left and right track as well as slip angle are essential for ongoing work in vehicle-based soil parameter estimation. The favorable comparison against motion capture data suggests this approach will be useful for laboratory and field-based application.",
"title": ""
},
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "9e3a7af7b8773f43ba32d30f3610af40",
"text": "Several attempts to enhance statistical parametric speech synthesis have contemplated deep-learning-based postfil-ters, which learn to perform a mapping of the synthetic speech parameters to the natural ones, reducing the gap between them. In this paper, we introduce a new pre-training approach for neural networks, applied in LSTM-based postfilters for speech synthesis, with the objective of enhancing the quality of the synthesized speech in a more efficient manner. Our approach begins with an auto-regressive training of one LSTM network, whose is used as an initialization for postfilters based on a denoising autoencoder architecture. We show the advantages of this initialization on a set of multi-stream postfilters, which encompass a collection of denoising autoencoders for the set of MFCC and fundamental frequency parameters of the artificial voice. Results show that the initialization succeeds in lowering the training time of the LSTM networks and achieves better results in enhancing the statistical parametric speech in most cases, when compared to the common random-initialized approach of the networks.",
"title": ""
}
] |
scidocsrr
|
5f6f4d48f6d527a6069050230a5c30b1
|
The Fixed-Size Ordinally-Forgetting Encoding Method for Neural Network Language Models
|
[
{
"docid": "1d956bafdb6b7d4aa2afcfeb77ac8cbb",
"text": "In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on the maximum likelihood estimation as well as discriminatively from labelled data. More interestingly, we have shown the proposed HOPE models are closely related to neural networks (NNs) in a sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning, supervised or semi-supervised learning.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "8cc42ad71caac7605648166f9049df8e",
"text": "This section considers the application of eye movements to user interfaces—both for analyzing interfaces, measuring usability, and gaining insight into human performance—and as an actual control medium within a human-computer dialogue. The two areas have generally been reported separately; but this book seeks to tie them together. For usability analysis, the user’s eye movements while using the system are recorded and later analyzed retrospectively, but the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user-computer dialogue. They might be the sole input, typically for disabled users or hands-busy applications, or they might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices.",
"title": ""
},
{
"docid": "a6c39c728d2338e8eb6bc7b255952cea",
"text": "Clustering methods need to be robust if they are to be useful in practice. In this paper, we analyze several popular robust clustering methods and show that they have much in common. We also establish a connection between fuzzy set theory and robust statistics and point out the similarities between robust clustering methods and statistical methods such as the weighted least-squares (LS) technique, the M estimator, the minimum volume ellipsoid (MVE) algorithm, cooperative robust estimation (CRE), minimization of probability of randomness (MINPRAN), and the epsilon contamination model. By gleaning the common principles upon which the methods proposed in the literature are based, we arrive at a unified view of robust clustering methods. We define several general concepts that are useful in robust clustering, state the robust clustering problem in terms of the defined concepts, and propose generic algorithms and guidelines for clustering noisy data. We also discuss why the generalized Hough transform is a suboptimal solution to the robust clustering problem.",
"title": ""
},
{
"docid": "96b4e076448b9db96eae08620fdac98c",
"text": "Incident Response has always been an important aspect of Information Security but it is often overlooked by security administrators. Responding to an incident is not solely a technical issue but has many management, legal, technical and social aspects that are presented in this paper. We propose a detailed management framework along with a complete structured methodology that contains best practices and recommendations for appropriately handling a security incident. We also present the state-of-the art technology in computer, network and software forensics as well as automated trace-back artifacts, schemas and protocols. Finally, we propose a generic Incident Response process within a corporate environment. © 2005 Elsevier Science. All rights reserved",
"title": ""
},
{
"docid": "eed6db13b57d9e510c22b4a95936ea5b",
"text": "Today data mining is widely used by companies with a strong consumer focus like retail, financial, communication and marketing organizations. Here technically data mining is the process of extraction of required information from huge databases. It allows users to analyze data from many different dimensions or angles, categorize it and summarize the relationships identified. The ultimate goal of this paper is to propose a methodology for the improvement in DB-SCAN algorithm to improve clustering accuracy. The proposed improvement is based on back propagation algorithm to calculate Euclidean distance in the dynamic manner. Also this paper shows the obtained results of implemented proposed and existing methods and it compares the results in terms of its execution time and accuracy.",
"title": ""
},
{
"docid": "868fb7175b170801a174df2263ca6063",
"text": "Vortex induced vibrations of bluff bodies occur when the vortex shedding frequency is close to the natural frequency of the structure. Of interest is the prediction of the lift and drag forces on the structure given some limited and scattered information on the velocity field. This is an inverse problem that is not straightforward to solve using standard computational fluid dynamics (CFD) methods, especially since no information is provided for the pressure. An even greater challenge is to infer the lift and drag forces given some dye or smoke visualizations of the flow field. Here we employ deep neural networks that are extended to encode the incompressible NavierStokes equations coupled with the structure’s dynamic motion equation. In the first case, given scattered data in space-time on the velocity field and the structure’s motion, we use four coupled deep neural networks to infer very accurately the structural parameters, the entire time-dependent pressure field (with no prior training data), and reconstruct the velocity vector field and the structure’s dynamic motion. In the second case, given scattered data in space-time on a concentration field only, we use five coupled deep neural networks to infer very accurately the vector velocity field and all other quantities of interest as before. This new paradigm of inference in fluid mechanics for coupled multi-physics problems enables velocity and pressure quantification from flow snapshots in small subdomains and can be exploited for flow control applications and also for system identification.",
"title": ""
},
{
"docid": "efa07fe0dc380134681f10bf646130a6",
"text": "This paper describes a compact robot with two magnetic wheels in a bicycle arrangement, which is intended for inspecting the inner casing of pipes with complex shaped structures. The locomotion concept is based on an adapted magnetic wheel unit integrating two lateral lever arms. These arms allow for slightly lifting off the wheel in order to locally decrease the magnetic force, as well as laterally stabilizing the wheel unit. The robot has the main advantage to be compact and mechanically simple. It features 5 active degrees of freedom: 2 driven wheels each equipped with an active lifter-stabilizer and 1 steering unit. This paper also presents the design and implementation of a prototype robot and its high mobility is shown. It is able to pass 90deg convex and concave obstacles with any inclination regarding the gravity. Finally, it only requires limited space to maneuver, since turning on spot around the rear wheel is possible.",
"title": ""
},
{
"docid": "49aa556fa64cf5cc9e524cbd4b27d426",
"text": "In this paper, we focus on tackling the problem of automatic accurate localization of detected objects in high-resolution remote sensing images. The two major problems for object localization in remote sensing images caused by the complex context information such images contain are achieving generalizability of the features used to describe objects and achieving accurate object locations. To address these challenges, we propose a new object localization framework, which can be divided into three processes: region proposal, classification, and accurate object localization process. First, a region proposal method is used to generate candidate regions with the aim of detecting all objects of interest within these images. Then, generic image features from a local image corresponding to each region proposal are extracted by a combination model of 2-D reduction convolutional neural networks (CNNs). Finally, to improve the location accuracy, we propose an unsupervised score-based bounding box regression (USB-BBR) algorithm, combined with a nonmaximum suppression algorithm to optimize the bounding boxes of regions that detected as objects. Experiments show that the dimension-reduction model performs better than the retrained and fine-tuned models and the detection precision of the combined CNN model is much higher than that of any single model. Also our proposed USB-BBR algorithm can more accurately locate objects within an image. Compared with traditional features extraction methods, such as elliptic Fourier transform-based histogram of oriented gradients and local binary pattern histogram Fourier, our proposed localization framework shows robustness when dealing with different complex backgrounds.",
"title": ""
},
{
"docid": "2258a0ba739557d489a796f050fad3e0",
"text": "The term fractional calculus is more than 300 years old. It is a generalization of the ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and goes back to times when Leibniz, Gauss, and Newton invented this kind of calculation. In a letter to L’Hospital in 1695 Leibniz raised the following question (Miller and Ross, 1993): “Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders?\" The story goes that L’Hospital was somewhat curious about that question and replied by another question to Leibniz. “What if the order will be 1/2?\" Leibniz in a letter dated September 30, 1695 replied: “It will lead to a paradox, from which one day useful consequences will be drawn.\" The question raised by Leibniz for a fractional derivative was an ongoing topic in the last 300 years. Several mathematicians contributed to this subject over the years. People like Liouville, Riemann, and Weyl made major contributions to the theory of fractional calculus. The story of the fractional calculus continued with contributions from Fourier, Abel, Leibniz, Grünwald, and Letnikov. Nowadays, the fractional calculus attracts many scientists and engineers. There are several applications of this mathematical phenomenon in mechanics, physics, chemistry, control theory and so on (Caponetto et al., 2010; Magin, 2006; Monje et al., 2010; Oldham and Spanier, 1974; Oustaloup, 1995; Podlubny, 1999). It is natural that many authors tried to solve the fractional derivatives, fractional integrals and fractional differential equations in Matlab. A few very good and interesting Matlab functions were already submitted to the MathWorks, Inc. Matlab Central File Exchange, where they are freely downloadable for sharing among the users. In this chapter we will use some of them. It is worth mentioning some addition to Matlab toolboxes, which are appropriate for the solution of fractional calculus problems. One of them is a toolbox created by CRONE team (CRONE, 2010) and another one is the Fractional State–Space Toolkit developed by Dominik Sierociuk (Sierociuk, 2005). Last but not least we should also mention a Matlab toolbox created by Dingyü Xue (Xue, 2010), which is based on Matlab object for fractional-order transfer function and some manipulation with this class of the transfer function. Despite that the mentioned toolboxes are mainly for control systems, they can be “abused\" for solutions of general problems related to fractional calculus as well. 10",
"title": ""
},
{
"docid": "2a7dce77aaff56b810f4a80c32dc80ea",
"text": "Automatically segmenting and classifying clinical free text into sections is an important first step to automatic information retrieval, information extraction and data mining tasks, as it helps to ground the significance of the text within. In this work we describe our approach to automatic section segmentation of clinical records such as hospital discharge summaries and radiology reports, along with section classification into pre-defined section categories. We apply machine learning to the problems of section segmentation and section classification, comparing a joint (one-step) and a pipeline (two-step) approach. We demonstrate that our systems perform well when tested on three data sets, two for hospital discharge summaries and one for radiology reports. We then show the usefulness of section information by incorporating it in the task of extracting comorbidities from discharge summaries.",
"title": ""
},
{
"docid": "a2130c0316eea0fa510f381ea312b65e",
"text": "A technique for building consistent 3D reconstructions from many views based on fitting a low rank matrix to a matrix with missing data is presented. Rank-four submatrices of minimal, or slightly larger, size are sampled and spans of their columns are combined to constrain a basis of the fitted matrix. The error minimized is expressed in terms of the original subspaces which leads to a better resistance to noise compared to previous methods. More than 90% of the missing data can be handled while finding an acceptable solution efficiently. Applications to 3D reconstruction using both affine and perspective camera models are shown. For the perspective model, a new linear method based on logarithms of positive depths from chirality is introduced to make the depths consistent with an overdetermined set of epipolar geometries. Results are shown for scenes and sequences of various types. Many images in open and closed sequences in narrow and wide base-line setups are reconstructed with reprojection errors around one pixel. It is shown that reconstructed cameras can be used to obtain dense reconstructions from epipolarly aligned images.",
"title": ""
},
{
"docid": "ec0ef1585583c2729e256149898be906",
"text": "Over the last two decades, the organizational environment of Higher Education Institutions (HEI) in many countries, has fundamentally changed. Student numbers have continuously increased since the 1980s and transformed Higher Education (HE) from an exclusive offering for a small elite to a mass product. Consequently, universities had to increasingly deal with operations management issues such as capacity planning and efficiency. In order to enable this expansion and as means to facilitate competition, the funding structure of HEI's has changed. Greater reliance on tuition fees and industryfunded research exposed universities to the forces of the market. All in all, growth, commercialization and competition have transformed HEI's from publicly funded cosy elite institutions to large professional service operations with more demanding customers. Consequently, they increasingly look at private sector management practices to deal with the rising performance pressure. During the last two decades, Lean Management has received the reputation as a reliable method for achieving performance improvements by delivering higher quality at lower costs. From its origins in manufacturing, Lean has spread first to the service sector and is now successfully adopted by an increasing number of public sector organizations. Paradoxically, the enthusiasm for Lean in HE has so far been limited. A conceptual framework for applying Lean Management methodology in HEI's is presented in this paper.",
"title": ""
},
{
"docid": "9b16eaa154370895b446cc4e66c9a8a9",
"text": "The 15 kV SiC N-IGBT is the state-of-the-art high voltage power semiconductor device developed by Cree. The SiC IGBT is exposed to a peak stress of 10-11 kV in power converter systems, with punch-through turn-on dv/dt over 100 kV/μs and turn-off dv/dt about 35 kV/μs. Such high dv/dt requires ultralow coupling capacitance in the dc-dc isolation stage of the gate driver for maintaining fidelity of the signals on the control-supply ground side. Accelerated aging of the insulation in the isolation stage is another serious concern. In this paper, a simple transformer based isolation with a toroid core is investigated for the above requirements of the 15 kV IGBT. The gate driver prototype has been developed with over 100 kV dc insulation capability, and its inter-winding coupling capacitance has been found to be 3.4 pF and 13 pF at 50 MHz and 100 MHz respectively. The performance of the gate driver prototype has been evaluated up to the above mentioned specification using double-pulse tests on high-side IGBT in a half-bridge configuration. The continuous testing at 5 kHz has been performed till 8 kV, and turn-on dv/dt of 85 kV/μs on a buck-boost converter. The corresponding experimental results are presented. Also, the test methodology of evaluating the gate driver at such high voltage, without a high voltage power supply is discussed. Finally, experimental results validating fidelity of the signals on the control-ground side are provided to show the influence of increased inter-winding coupling capacitance on the performance of the gate driver.",
"title": ""
},
{
"docid": "c4525bcf7db5540a389b79330061eca6",
"text": "This work addresses design and implementation issues of a 24 GHz rectenna, which is developed to demonstrate the feasibility of wireless power harvesting and transmission (WPT) techniques towards millimeter-wave regime. The proposed structure includes a compact circularly polarized substrate integrated waveguide (SIW) cavity-backed antenna array integrated with a self-biased rectifier using commercial Schottky diodes. The antenna and the rectifier are individually designed, optimized, fabricated and measured. Then they are integrated into one circuit in order to validate the studied rectenna architecture. The maximum measured conversion efficiency and DC voltage are respectively equal to 24% and 0.6 V for an input power density of 10 mW/cm2.",
"title": ""
},
{
"docid": "54b6c687262c5d051529e5ed2d2bf8a1",
"text": "INTRODUCTION\nThe chick embryo is an emerging in vivo model in several areas of pre-clinical research including radiopharmaceutical sciences. Herein, it was evaluated as a potential test system for assessing the biodistribution and in vivo stability of radiopharmaceuticals. For this purpose, a number of radiopharmaceuticals labeled with (18)F, (125)I, (99m)Tc, and (177)Lu were investigated in the chick embryo and compared with the data obtained in mice.\n\n\nMETHODS\nChick embryos were cultivated ex ovo for 17-19 days before application of the radiopharmaceutical directly into the peritoneum or intravenously using a vein of the chorioallantoic membrane (CAM). At a defined time point after application of radioactivity, the embryos were euthanized by shock-freezing using liquid nitrogen. Afterwards they were separated from residual egg components for post mortem imaging purposes using positron emission tomography (PET) or single photon emission computed tomography (SPECT).\n\n\nRESULTS\nSPECT images revealed uptake of [(99m)Tc]pertechnetate and [(125)I]iodide in the thyroid of chick embryos and mice, whereas [(177)Lu]lutetium, [(18)F]fluoride and [(99m)Tc]-methylene diphosphonate ([(99m)Tc]-MDP) were accumulated in the bones. [(99m)Tc]-dimercaptosuccinic acid ((99m)Tc-DMSA) and the somatostatin analog [(177)Lu]-DOTATOC, as well as the folic acid derivative [(177)Lu]-DOTA-folate showed accumulation in the renal tissue whereas [(99m)Tc]-mebrofenin accumulated in the gall bladder and intestine of both species. In vivo dehalogenation of [(18)F]fallypride and of the folic acid derivative [(125)I]iodo-tyrosine-folate was observed in both species. In contrast, the 3'-aza-2'-[(18)F]fluorofolic acid ([(18)F]-AzaFol) was stable in the chick embryo as well as in the mouse.\n\n\nCONCLUSIONS\nOur results revealed the same tissue distribution profile and in vivo stability of radiopharmaceuticals in the chick embryo and the mouse. This observation is promising with regard to a potential use of the chick embryo as an inexpensive and simple test model for preclinical screening of novel radiopharmaceuticals.",
"title": ""
},
{
"docid": "8e082f030aa5c5372fe327d4291f1864",
"text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]",
"title": ""
},
{
"docid": "8ab4f34c736742a153477f919dfb4d8f",
"text": "In this paper, we model the trajectory of sea vessels and provide a service that predicts in near-real time the position of any given vessel in 4’, 10’, 20’ and 40’ time intervals. We explore the necessary tradeoffs between accuracy, performance and resource utilization are explored given the large volume and update rates of input data. We start with building models based on well-established machine learning algorithms using static datasets and multi-scan training approaches and identify the best candidate to be used in implementing a single-pass predictive approach, under real-time constraints. The results are measured in terms of accuracy and performance and are compared against the baseline kinematic equations. Results show that it is possible to efficiently model the trajectory of multiple vessels using a single model, which is trained and evaluated using an adequately large, static dataset, thus achieving a significant gain in terms of resource usage while not compromising accuracy.",
"title": ""
},
{
"docid": "1be35b9562a428a7581541559dc16bd8",
"text": "OBJECTIVE\nTo assess the effect of virtual reality training on an actual laparoscopic operation.\n\n\nDESIGN\nProspective randomised controlled and blinded trial.\n\n\nSETTING\nSeven gynaecological departments in the Zeeland region of Denmark.\n\n\nPARTICIPANTS\n24 first and second year registrars specialising in gynaecology and obstetrics.\n\n\nINTERVENTIONS\nProficiency based virtual reality simulator training in laparoscopic salpingectomy and standard clinical education (controls).\n\n\nMAIN OUTCOME MEASURE\nThe main outcome measure was technical performance assessed by two independent observers blinded to trainee and training status using a previously validated general and task specific rating scale. The secondary outcome measure was operation time in minutes.\n\n\nRESULTS\nThe simulator trained group (n=11) reached a median total score of 33 points (interquartile range 32-36 points), equivalent to the experience gained after 20-50 laparoscopic procedures, whereas the control group (n=10) reached a median total score of 23 (22-27) points, equivalent to the experience gained from fewer than five procedures (P<0.001). The median total operation time in the simulator trained group was 12 minutes (interquartile range 10-14 minutes) and in the control group was 24 (20-29) minutes (P<0.001). The observers' inter-rater agreement was 0.79.\n\n\nCONCLUSION\nSkills in laparoscopic surgery can be increased in a clinically relevant manner using proficiency based virtual reality simulator training. The performance level of novices was increased to that of intermediately experienced laparoscopists and operation time was halved. Simulator training should be considered before trainees carry out laparoscopic procedures.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT00311792.",
"title": ""
},
{
"docid": "61a2c34fde77cdebd3896ebd50476c63",
"text": "The fractional Fourier transform (FRFT), which is a generalization of the classical Fourier transform, was introduced a number of years ago in the mathematics literature but appears to have remained largely unknown to the signal processing community, to which it may, however, be potentially useful. The FRFT depends on a parameter cy and can be interpreted as a rotation by an angle a in the time-frequency plane. An FRFT with a = n-/2 corresponds to the classical Fourier transform, and an FRFT with Q = 0 corresponds to the identity operator. On the other hand, the angles of successively performed FRFT’s simply add up, as do the angles of successive rotations. The FRFT of a signal can also be interpreted as a decomposition of the signal in terms of chirps. In this paper, we briefly introduce the FRFT and a number of its properties and then present some new results: the interpretation as a rotation in the time-frequency plane, and the FRFT’s relationships with time-frequency representations such as the Wigner distribution, the ambiguity function, the shorttime Fourier transform and the spectrogram. These relationships have a very simple and natural form and support the FRFT’s interpretation as a rotation operator. Examples of FRFT’s of some simple signals are given. An example of the application of the FRFT is also given.",
"title": ""
},
{
"docid": "8f0ac7417daf0c995263274738dcbb13",
"text": "Technology platform strategies offer a novel way to orchestrate a rich portfolio of contributions made by the many independent actors who form an ecosystem of heterogeneous complementors around a stable platform core. This form of organising has been successfully used in the smartphone, gaming, commercial software, and other industrial sectors. While technology ecosystems require stability and homogeneity to leverage common investments in standard components, they also need variability and heterogeneity to meet evolving market demand. Although the required balance between stability and evolvability in the ecosystem has been addressed conceptually in the literature, we have less understanding of its underlying mechanics or appropriate governance. Through an extensive case study of a business software ecosystem consisting of a major multinational manufacturer of enterprise resource planning (ERP) software at the core, and a heterogeneous system of independent implementation partners and solution developers on the periphery, our research identifies three salient tensions that characterize the ecosystem: standard-variety; control-autonomy; and collective-individual. We then highlight the specific ecosystem governance mechanisms designed to simultaneously manage desirable and undesirable variance across each tension. Paradoxical tensions may manifest as dualisms, where actors are faced with contradictory and disabling „either/or‟ decisions. Alternatively, they may manifest as dualities, where tensions are framed as complementary and mutually-enabling. We identify conditions where latent, mutually enabling tensions become manifest as salient, disabling tensions. By identifying conditions in which complementary logics are overshadowed by contradictory logics, our study further contributes to the understanding of the dynamics of technology ecosystems, as well as the effective design of technology ecosystem governance that can explicitly embrace paradoxical tensions towards generative outcomes.",
"title": ""
},
{
"docid": "a3b4e8b4a54921da210b42e43fc2e7ff",
"text": "CONTEXT\nRecent reports show that obesity and diabetes have increased in the United States in the past decade.\n\n\nOBJECTIVE\nTo estimate the prevalence of obesity, diabetes, and use of weight control strategies among US adults in 2000.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThe Behavioral Risk Factor Surveillance System, a random-digit telephone survey conducted in all states in 2000, with 184 450 adults aged 18 years or older.\n\n\nMAIN OUTCOME MEASURES\nBody mass index (BMI), calculated from self-reported weight and height; self-reported diabetes; prevalence of weight loss or maintenance attempts; and weight control strategies used.\n\n\nRESULTS\nIn 2000, the prevalence of obesity (BMI >/=30 kg/m(2)) was 19.8%, the prevalence of diabetes was 7.3%, and the prevalence of both combined was 2.9%. Mississippi had the highest rates of obesity (24.3%) and of diabetes (8.8%); Colorado had the lowest rate of obesity (13.8%); and Alaska had the lowest rate of diabetes (4.4%). Twenty-seven percent of US adults did not engage in any physical activity, and another 28.2% were not regularly active. Only 24.4% of US adults consumed fruits and vegetables 5 or more times daily. Among obese participants who had had a routine checkup during the past year, 42.8% had been advised by a health care professional to lose weight. Among participants trying to lose or maintain weight, 17.5% were following recommendations to eat fewer calories and increase physical activity to more than 150 min/wk.\n\n\nCONCLUSIONS\nThe prevalence of obesity and diabetes continues to increase among US adults. Interventions are needed to improve physical activity and diet in communities nationwide.",
"title": ""
}
] |
scidocsrr
|
9299a8cf1708072bc8f5a59f35361a16
|
Measuring thin-client performance using slow-motion benchmarking
|
[
{
"docid": "014f1369be6a57fb9f6e2f642b3a4926",
"text": "VNC is platform-independent – a VNC viewer on one operating system may connect to a VNC server on the same or any other operating system. There are clients and servers for many GUIbased operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.",
"title": ""
}
] |
[
{
"docid": "a48ada0e9d835f26a484d90c62ffc4cf",
"text": "Plastics have become an important part of modern life and are used in different sectors of applications like packaging, building materials, consumer products and much more. Each year about 100 million tons of plastics are produced worldwide. Demand for plastics in India reached about 4.3 million tons in the year 2001-02 and would increase to about 8 million tons in the year 2006-07. Degradation is defined as reduction in the molecular weight of the polymer. The Degradation types are (a).Chain end degradation/de-polymerization (b).Random degradation/reverse of the poly condensation process. Biodegradation is defined as reduction in the molecular weight by naturally occurring microorganisms such as bacteria, fungi, and actinomycetes. That is involved in the degradation of both natural and synthetic plastics. Examples of Standard Testing for Polymer Biodegradability in Various Environments. ASTM D5338: Standard Test Method for Determining the Aerobic Biodegradation of Plastic Materials under Controlled Composting Conditions, ASTM D5210: Standard Test Method for Determining the Anaerobic Biodegradation of Plastic Materials in the Presence of Municipal Sewage Sludge, ASTM D5526: Standard Test Method for Determining Anaerobic Biodegradation of Plastic Materials under Accelerated Landfill Conditions, ASTM D5437: Standard Practice for Weathering of Plastics under Marine Floating Exposure. Plastics are biodegraded, (1).In wild nature by aerobic conditions CO2, water are produced,(2).In sediments & landfills by anaerobic conditions CO2, water, methane are produced, (3).In composts and soil by partial aerobic & anaerobic conditions. This review looks at the technological advancement made in the development of more easily biodegradable plastics and the biodegradation of conventional plastics by microorganisms. Additives, such as pro-oxidants and starch, are applied in synthetic materials to modify and make plastics biodegradable. Reviewing published and ongoing studies on plastic biodegradation, this paper attempts to make conclusions on potentially viable methods to reduce impacts of plastic waste on the",
"title": ""
},
{
"docid": "6c11b5d9ec8a89f843b08fe998de194c",
"text": "As large-scale multimodal data are ubiquitous in many real-world applications, learning multimodal representations for efficient retrieval is a fundamental problem. Most existing methods adopt shallow structures to perform multimodal representation learning. Due to a limitation of learning ability of shallow structures, they fail to capture the correlation of multiple modalities. Recently, multimodal deep learning was proposed and had proven its superiority in representing multimodal data due to its high nonlinearity. However, in order to learn compact and accurate representations, how to reduce the redundant information lying in the multimodal representations and incorporate different complexities of different modalities in the deep models is still an open problem. In order to address the aforementioned problem, in this paper we propose a hashing-based orthogonal deep model to learn accurate and compact multimodal representations. The method can better capture the intra-modality and inter-modality correlations to learn accurate representations. Meanwhile, in order to make the representations compact, the hashing-based model can generate compact hash codes and the proposed orthogonal structure can reduce the redundant information lying in the codes by imposing orthogonal regularizer on the weighting matrices. We also theoretically prove that, in this case, the learned codes are guaranteed to be approximately orthogonal. Moreover, considering the different characteristics of different modalities, effective representations can be attained with different number of layers for different modalities. Comprehensive experiments on three real-world datasets demonstrate a substantial gain of our method on retrieval tasks compared with existing algorithms.",
"title": ""
},
{
"docid": "ce08ae4dd55bb290900f49010e219513",
"text": "BACKGROUND\nCurrent antipsychotics have only a limited effect on 2 core aspects of schizophrenia: negative symptoms and cognitive deficits. Minocycline is a second-generation tetracycline that has a beneficial effect in various neurologic disorders. Recent findings in animal models and human case reports suggest its potential for the treatment of schizophrenia. These findings may be linked to the effect of minocycline on the glutamatergic system, through inhibition of nitric oxide synthase and blocking of nitric oxide-induced neurotoxicity. Other proposed mechanisms of action include effects of minocycline on the dopaminergic system and its inhibition of microglial activation.\n\n\nOBJECTIVE\nTo examine the efficacy of minocycline as an add-on treatment for alleviating negative and cognitive symptoms in early-phase schizophrenia.\n\n\nMETHOD\nA longitudinal double-blind, randomized, placebo-controlled design was used, and patients were followed for 6 months from August 2003 to March 2007. Seventy early-phase schizophrenia patients (according to DSM-IV) were recruited and 54 were randomly allocated in a 2:1 ratio to minocycline 200 mg/d. All patients had been initiated on treatment with an atypical antipsychotic < or = 14 days prior to study entry (risperidone, olanzapine, quetiapine, or clozapine; 200-600 mg/d chlorpromazine-equivalent doses). Clinical, cognitive, and functional assessments were conducted, with the Scale for the Assessment of Negative Symptoms (SANS) as the primary outcome measure.\n\n\nRESULTS\nMinocycline was well tolerated, with few adverse events. It showed a beneficial effect on negative symptoms and general outcome (evident in SANS, Clinical Global Impressions scale). A similar pattern was found for cognitive functioning, mainly in executive functions (working memory, cognitive shifting, and cognitive planning).\n\n\nCONCLUSIONS\nMinocycline treatment was associated with improvement in negative symptoms and executive functioning, both related to frontal-lobe activity. Overall, the findings support the beneficial effect of minocycline add-on therapy in early-phase schizophrenia.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00733057.",
"title": ""
},
{
"docid": "7c1af982b6ac6aa6df4549bd16c1964c",
"text": "This paper deals with the problem of estimating the position of emitters using only direction of arrival information. We propose an improvement of newly developed algorithm for position finding of a stationary emitter called sensitivity analysis. The proposed method uses Taylor series expansion iteratively to enhance the estimation of the emitter location and reduce position finding error. Simulation results show that our proposed method makes a great improvement on accuracy of position finding with respect to sensitivity analysis method.",
"title": ""
},
{
"docid": "f334f49a1e21e3278c25ca0d63b2ef8a",
"text": "We show that if (J,,} is a sequence of uniformly LI-bounded functions on a measure space, and if.f, -fpointwise a.e., then lim,,_(I{lf,, 1 -IIf,, fII) If I,' for all 0 < p < oc. This result is also generalized in Theorem 2 to some functionals other than the L P norm, namely I. /( J,, -(f, f) f ) -1 0 for suitablej: C -C and a suitable sequence (fJ}. A brief discussion is given of the usefulness of this result in variational problems.",
"title": ""
},
{
"docid": "b3bd600c56bf65171cfc6c2d62cfb665",
"text": "GaN is now providing solid-state power amplifiers of higher efficiency, bandwidth, and power density than could be achieved only a few years ago. Novel circuit topologies combined with GaN's high-voltage capabilities and linearization are allowing GaN high-power amplifiers to simultaneously achieve both linearity and record high efficiency. GaN high-power amplifiers have been produced with more than 100 W of power over multioctave bandwidths and with PAEs of more than 60%. Narrower-band high-power amplifiers have been produced with PAEs of more than 90%.",
"title": ""
},
{
"docid": "0811f0768e8112b40bbcd38625db2526",
"text": "The Alfred Mann Foundation is completing development of a coordinated network of BION/spl reg/ microstimulator/sensor (hereinafter implant) that has broad stimulating, sensing and communication capabilities. The network consists of a master control unit (MCU) in communication with a group of BION implants. Each implant is powered by a custom lithium-ion rechargeable 10 mW-hr battery. The charging, discharging, safety, stimulating, sensing, and communication circuits are designed to be highly efficient to minimize energy use and maximize battery life and time between charges. The stimulator can be programmed to deliver pulses in any value in the following range: 5 /spl mu/A to 20 mA in 3.3% constant current steps, 7 /spl mu/s to 2000 /spl mu/s in 7 /spl mu/s pulse width steps, and 1 to 4000 Hz in frequency. The preamp voltage sensor covers the range 10 /spl mu/V to 1.0 V with bandpass filtering and several forms of data analysis. The implant also contains sensors that can read out pressure, temperature, DC magnetic field, and distance (via a low frequency magnetic field) up to 20 cm between any two BION implants. The MCU contains a microprocessor, user interface, two-way communication system, and a rechargeable battery. The MCU can command and interrogate in excess of 800 BlON implants every 10 ms, i.e., 100 times a second.",
"title": ""
},
{
"docid": "8e53336bb4d216d78a6ab79faacb48fc",
"text": "Pattern glare is characterised by symptoms of visual perceptual distortions and visual stress on viewing striped patterns. People with migraine or Meares-Irlen syndrome (visual stress) are especially prone to pattern glare. The literature on pattern glare is reviewed, and the goal of this study was to develop clinical norms for the Wilkins and Evans Pattern Glare Test. This comprises three test plates of square wave patterns of spatial frequency 0.5, 3 and 12 cycles per degree (cpd). Patients are shown the 0.5 cpd grating and the number of distortions that are reported in response to a list of questions is recorded. This is repeated for the other patterns. People who are prone to pattern glare experience visual perceptual distortions on viewing the 3 cpd grating, and pattern glare can be quantified as either the sum of distortions reported with the 3 cpd pattern or as the difference between the number of distortions with the 3 and 12 cpd gratings, the '3-12 cpd difference'. In study 1, 100 patients consulting an optometrist performed the Pattern Glare Test and the 95th percentile of responses was calculated as the limit of the normal range. The normal range for the number of distortions was found to be <4 on the 3 cpd grating and <2 for the 3-12 cpd difference. Pattern glare was similar in both genders but decreased with age. In study 2, 30 additional participants were given the test in the reverse of the usual testing order and were compared with a sub-group from study 1, matched for age and gender. Participants experienced more distortions with the 12 cpd grating if it was presented after the 3 cpd grating. However, the order did not influence the two key measures of pattern glare. In study 3, 30 further participants who reported a medical diagnosis of migraine were compared with a sub-group of the participants in study 1 who did not report migraine or frequent headaches, matched for age and gender. The migraine group reported more symptoms on viewing all gratings, particularly the 3 cpd grating. The only variable to be significantly different between the groups was the 3-12 cpd difference. In conclusion, people have an abnormal degree of pattern glare if they have a Pattern Glare Test score of >3 on the 3 cpd grating or a score of >1 on the 3-12 cpd difference. The literature suggests that these people are likely to have visual stress in everyday life and may therefore benefit from interventions designed to alleviate visual stress, such as precision tinted lenses.",
"title": ""
},
{
"docid": "8eb6c74d678235a6fd4df755a133115e",
"text": "We have demonstrated a 70-nm n-channel tunneling field-effect transistor (TFET) which has a subthreshold swing (SS) of 52.8 mV/dec at room temperature. It is the first experimental result that shows a sub-60-mV/dec SS in the silicon-based TFETs. Based on simulation results, the gate oxide and silicon-on-insulator layer thicknesses were scaled down to 2 and 70 nm, respectively. However, the ON/ OFF current ratio of the TFET was still lower than that of the MOSFET. In order to increase the on current further, the following approaches can be considered: reduction of effective gate oxide thickness, increase in the steepness of the gradient of the source to channel doping profile, and utilization of a lower bandgap channel material",
"title": ""
},
{
"docid": "787979d6c1786f110ff7a47f09b82907",
"text": "Imbalance settlement markets are managed by the system operators and provide a mechanism for settling the inevitable discrepancies between contractual agreements and physical delivery. In European power markets, settlements schemes are mainly based on heuristic penalties. These arrangements have disadvantages: First, they do not provide transparency about the cost of the reserve capacity that the system operator may have obtained ahead of time, nor about the cost of the balancing energy that is actually deployed. Second, they can be gamed if market participants use the imbalance settlement as an opportunity for market arbitrage, for example if market participants use balancing energy to avoid higher costs through regular trade on illiquid energy markets. Third, current practice hinders the market-based integration of renewable energy and the provision of financial incentives for demand response through rigid penalty rules. In this paper we try to remedy these disadvantages by proposing an imbalance settlement procedure with an incentive compatible cost allocation scheme for reserve capacity and deployed energy. Incentive compatible means that market participants voluntarily and truthfully state their valuation of ancillary services. We show that this approach guarantees revenue sufficiency for the system operator and provides financial incentives for balance responsible parties to keep imbalances close to zero.",
"title": ""
},
{
"docid": "5093e3d152d053a9f3322b34096d3e4e",
"text": "To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.",
"title": ""
},
{
"docid": "e4ed62511669cb333b1ab97d095fda46",
"text": "This paper reports a four-element array tag antenna close to a human body for UHF Radio frequency identification (RFID) applications. The four-element array is based on PIFA grounded by vias, which can enhance the directive gain. The array antenna is fed by a four-port microstrip-line power divider. The input impedance of the power divider is designed to match with that of a Monza® 4 microchip. The parametric analysis of conjugate matching was performed and prototypes were fabricated to verify the simulated results. Experimental tests show that the maximum reading range achieved by an RFID tag equipped with the array antenna achieves about 3.9 m when the tag was mounted on a human body.",
"title": ""
},
{
"docid": "8eca353064d3b510b32c486e5f26c264",
"text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.",
"title": ""
},
{
"docid": "8106e11ecb11ffc131a36917a60dce33",
"text": "Augmented Reality, Architecture and Ubiquity: Technologies, Theories and Frontiers",
"title": ""
},
{
"docid": "dcd919590e0b6b52ea3a6be7378d5d25",
"text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.",
"title": ""
},
{
"docid": "17a475b655134aafde0f49db06bec127",
"text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically establishing relations between values of simple image features (e.g. amount of moving pixels, edge density, etc.) to the number of people. While the methods of the first category exhibit poor accuracy in cases of occlusions, the second category of methods are sensitive to perspective distortions, and require people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its applicability scope, we choose-in contrast to the majority of existing methods from this category-not to require prior learning of categories corresponding to different number of people. Second, we search for a suitable way of correcting the perspective distortion. Finally, we link the estimation to a confidence value that takes into account the known factors being of influence on the result. The confidence is then used to refine final results.",
"title": ""
},
{
"docid": "e327e992a6973a91d84573390920c48f",
"text": "The research regarding Web information extraction focuses on learning rules to extract some selected information from Web documents. Many proposals are ad hoc and cannot benefit from the advances in machine learning; furthermore, they are likely to fade away as the Web evolves, and their intrinsic assumptions are not satisfied. Some authors have explored transforming Web documents into relational data and then using techniques that got inspiration from inductive logic programming. In theory, such proposals should be easier to adapt as the Web evolves because they build on catalogues of features that can be adapted without changing the proposals themselves. Unfortunately, they are difficult to scale as the number of documents or features increases. In the general field of machine learning, there are propositio-relational proposals that attempt to provide effective and efficient means to learn from relational data using propositional techniques, but they have seldom been explored regarding Web information extraction. In this article, we present a new proposal called Roller: it relies on a search procedure that uses a dynamic flattening technique to explore the context of the nodes that provide the information to be extracted; it is configured with an open catalogue of features, so that it can adapt to the evolution of the Web; it also requires a base learner and a rule scorer, which helps it benefit from the continuous advances in machine learning. Our experiments confirm that it outperforms other state-of-the-art proposals in terms of effectiveness and that it is very competitive in terms of efficiency; we have also confirmed that our conclusions are solid from a statistical point of view.",
"title": ""
},
{
"docid": "65e3890edd57a0a6de65b4e38f3cea1c",
"text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an `1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of `1-analysis for such problems.",
"title": ""
},
{
"docid": "f840350d14a99f3da40729cfe6d56ef5",
"text": "This paper presents a sub-radix-2 redundant architecture to improve the performance of switched-capacitor successive-approximation-register (SAR) analog-to-digital converters (ADCs). The redundancy not only guarantees digitally correctable static nonlinearities of the converter, it also offers means to combat dynamic errors in the conversion process, and thus, accelerating the speed of the SAR architecture. A perturbation-based digital calibration technique is also described that closely couples with the architecture choice to accomplish simultaneous identification of multiple capacitor mismatch errors of the ADC, enabling the downsizing of all sampling capacitors to save power and silicon area. A 12-bit prototype measured a Nyquist 70.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a Nyquist 90.3-dB spurious free dynamic range (SFDR) at 22.5 MS/s, while dissipating 3.0-mW power from a 1.2-V supply and occupying 0.06-mm2 silicon area in a 0.13-μm CMOS process. The figure of merit (FoM) of this ADC is 51.3 fJ/step measured at 22.5 MS/s and 36.7 fJ/step at 45 MS/s.",
"title": ""
}
] |
scidocsrr
|
de77371315486fb23f2bc140a6c02d0c
|
The Natural Language Decathlon: Multitask Learning as Question Answering
|
[
{
"docid": "6605f7e07bed0a173dececa1aa94f809",
"text": "Abstractive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-tosequence models. However, these models can still benefit from stronger natural language inference skills, since a correct summary is logically entailed by the input document, i.e., it should not contain any contradictory or unrelated information. We incorporate such knowledge into an abstractive summarization model via multi-task learning, where we share its decoder parameters with those of an entailment generation model. We achieve promising initial improvements based on multiple metrics and datasets (including a test-only setting). The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.ive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-tosequence models. However, these models can still benefit from stronger natural language inference skills, since a correct summary is logically entailed by the input document, i.e., it should not contain any contradictory or unrelated information. We incorporate such knowledge into an abstractive summarization model via multi-task learning, where we share its decoder parameters with those of an entailment generation model. We achieve promising initial improvements based on multiple metrics and datasets (including a test-only setting). The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.",
"title": ""
},
{
"docid": "e5bf05ae6700078dda83eca8d2f65cd4",
"text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.",
"title": ""
},
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
}
] |
[
{
"docid": "f723f2d583c6313396db195876876f98",
"text": "After decades of continuous scaling, further advancement of silicon microelectronics across the entire spectrum of computing applications is today limited by power dissipation. While the trade-off between power and performance is well-recognized, most recent studies focus on the extreme ends of this balance. By concentrating instead on an intermediate range, an ~ 8× improvement in power efficiency can be attained without system performance loss in parallelizable applications-those in which such efficiency is most critical. It is argued that power-efficient hardware is fundamentally limited by voltage scaling, which can be achieved only by blurring the boundaries between devices, circuits, and systems and cannot be realized by addressing any one area alone. By simultaneously considering all three perspectives, the major issues involved in improving power efficiency in light of performance and area constraints are identified. Solutions for the critical elements of a practical computing system are discussed, including the underlying logic device, associated cache memory, off-chip interconnect, and power delivery system. The IBM Blue Gene system is then presented as a case study to exemplify several proposed directions. Going forward, further power reduction may demand radical changes in device technologies and computer architecture; hence, a few such promising methods are briefly considered.",
"title": ""
},
{
"docid": "e0e33d26cc65569e80213069cb5ad857",
"text": "Capsule Networks have great potential to tackle problems in structural biology because of their aention to hierarchical relationships. is paper describes the implementation and application of a Capsule Network architecture to the classication of RAS protein family structures on GPU-based computational resources. e proposed Capsule Network trained on 2D and 3D structural encodings can successfully classify HRAS and KRAS structures. e Capsule Network can also classify a protein-based dataset derived from a PSI-BLAST search on sequences of KRAS and HRAS mutations. Our results show an accuracy improvement compared to traditional convolutional networks, while improving interpretability through visualization of activation vectors.",
"title": ""
},
{
"docid": "cad2742f731edaf67924ce002d9a1f94",
"text": "Output impedance of active-clamp converters is a valid method to achieve current sharing among parallel-connected power stages. Nevertheless, parasitic capacitances result in resonances that modify converter behavior and current balance. A solution is presented and validated. The current balance is achieved without a dedicated control.",
"title": ""
},
{
"docid": "0f927fc7b8005ee6bb6ec22d8070a062",
"text": "We propose a Dynamic-Spatial-Attention (DSA) Recurrent Neural Network (RNN) for anticipating accidents in dashcam videos (Fig. 1). Our DSA-RNN learns to (1) distribute soft-attention to candidate objects dynamically to gather subtle cues and (2) model the temporal dependencies of all cues to robustly anticipate an accident. Anticipating accidents is much less addressed than anticipating events such as changing a lane, making a turn, etc., since accidents are rare to be observed and can happen in many different ways mostly in a sudden. To overcome these challenges, we (1) utilize state-of-the-art object detector [3] to detect candidate objects, and (2) incorporate full-frame and object-based appearance and motion features in our model. We also harvest a diverse dataset of 678 dashcam accident videos on the web (Fig. 3). The dataset is unique, since various accidents (e.g., a motorbike hits a car, a car hits another car, etc.) occur in all videos. We manually mark the time-location of accidents and use them as supervision to train and evaluate our method. We show that our method anticipates accidents about 2 seconds before they occur with 80% recall and 56.14% precision. Most importantly, it achieves the highest mean average precision (74.35%) outperforming other baselines without attention or RNN. 2 Fu-Hsiang Chan, Yu-Ting Chen, Yu Xiang, Min Sun",
"title": ""
},
{
"docid": "95fbf262f9e673bd646ad7e02c5cbd53",
"text": "Department of Finance Stern School of Business and NBER, New York University, 44 W. 4th Street, New York, NY 10012; mkacperc@stern.nyu.edu; http://www.stern.nyu.edu/∼mkacperc. Department of Finance Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; svnieuwe@stern.nyu.edu; http://www.stern.nyu.edu/∼svnieuwe. Department of Economics Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; lveldkam@stern.nyu.edu; http://www.stern.nyu.edu/∼lveldkam. We thank John Campbell, Joseph Chen, Xavier Gabaix, Vincent Glode, Ralph Koijen, Jeremy Stein, Matthijs van Dijk, and seminar participants at NYU Stern (economics and finance), Harvard Business School, Chicago Booth, MIT Sloan, Yale SOM, Stanford University (economics and finance), University of California at Berkeley (economics and finance), UCLA economics, Duke economics, University of Toulouse, University of Vienna, Australian National University, University of Melbourne, University of New South Wales, University of Sydney, University of Technology Sydney, Erasmus University, University of Mannheim, University of Alberta, Concordia, Lugano, the Amsterdam Asset Pricing Retreat, the Society for Economic Dynamics meetings in Istanbul, CEPR Financial Markets conference in Gerzensee, UBC Summer Finance conference, and Econometric Society meetings in Atlanta for useful comments and suggestions. Finally, we thank the Q-group for their generous financial support.",
"title": ""
},
{
"docid": "1d356c920fb720252d827164752dffe5",
"text": "In the early days of machine learning, Donald Michie introduced two orthogonal dimensions to evaluate performance of machine learning approaches – predictive accuracy and comprehensibility of the learned hypotheses. Later definitions narrowed the focus to measures of accuracy. As a consequence, statistical/neuronal approaches have been favoured over symbolic approaches to machine learning, such as inductive logic programming (ILP). Recently, the importance of comprehensibility has been rediscovered under the slogan ‘explainable AI’. This is due to the growing interest in black-box deep learning approaches in many application domains where it is crucial that system decisions are transparent and comprehensible and in consequence trustworthy. I will give a short history of machine learning research followed by a presentation of two specific approaches of symbolic machine learning – inductive logic programming and end-user programming. Furthermore, I will present current work on explanation generation. Die Arbeitsweise der Algorithmen, die über uns entscheiden, muss transparent gemacht werden, und wir müssen die Möglichkeit bekommen, die Algorithmen zu beeinflussen. Dazu ist es unbedingt notwendig, dass die Algorithmen ihre Entscheidung begründen! Peter Arbeitsloser zu John of Us, Qualityland, Marc-Uwe Kling, 2017",
"title": ""
},
{
"docid": "693ce623f4f5b2cdd2eb6f4c45603524",
"text": "Metabolomics is perhaps the ultimate level of post-genomic analysis as it can reveal changes in metabolite fluxes that are controlled by only minor changes within gene expression measured using transcriptomics and/or by analysing the proteome that elucidates post-translational control over enzyme activity. Metabolic change is a major feature of plant genetic modification and plant interactions with pathogens, pests, and their environment. In the assessment of genetically modified plant tissues, metabolomics has been used extensively to explore by-products resulting from transgene expression and scenarios of substantial equivalence. Many studies have concentrated on the physiological development of plant tissues as well as on the stress responses involved in heat shock or treatment with stress-eliciting molecules such as methyl jasmonic acid, yeast elicitor or bacterial lipopolysaccharide. Plant-host interactions represent one of the most biochemically complex and challenging scenarios that are currently being assessed by metabolomic approaches. For example, the mixtures of pathogen-colonised and non-challenged plant cells represent an extremely heterogeneous and biochemically rich sample; there is also the further complication of identifying which metabolites are derived from the plant host and which are from the interacting pathogen. This review will present an overview of the analytical instrumentation currently applied to plant metabolomic analysis, literature within the field will be reviewed paying particular regard to studies based on plant-host interactions and finally the future prospects on the metabolomic analysis of plants and plant-host interactions will be discussed.",
"title": ""
},
{
"docid": "426c61637ea724f81b2f1f1b63094095",
"text": "Cancer is the general name for a group of more than 100 diseases. Although cancer includes different types of diseases, they all start because abnormal cells grow out of control. Without treatment, cancer can cause serious health problems and even loss of life. Early detection of cancer may reduce mortality and morbidity. This paper presents a review of the detection methods for lung, breast, and brain cancers. These methods used for diagnosis include artificial intelligence techniques, such as support vector machine neural network, artificial neural network, fuzzy logic, and adaptive neuro-fuzzy inference system, with medical imaging like X-ray, ultrasound, magnetic resonance imaging, and computed tomography scan images. Imaging techniques are the most important approach for precise diagnosis of human cancer. We investigated all these techniques to identify a method that can provide superior accuracy and determine the best medical images for use in each type of cancer.",
"title": ""
},
{
"docid": "5611107338100a2d202f7dbde5fd39ac",
"text": "This experiment investigated the ability of rats with dorsal striatal or fornix damage to learn the location of a visible platform in a water maze. We also assessed the animals' ability to find the platform when it was hidden (submerged). Rats with neurotoxic damage to the dorsal striatum acquired both the visible and hidden platform versions of the task, but when required to choose between the spatial location they had learned and the visible platform in a new location they swam first to the old spatial location. Rats with radio-frequency damage to the fornix acquired the visible platform version of the water maze task but failed to learn about the platform's location in space. When the visible platform was moved to a new location they swam directly to it. Normal rats acquired both the visible and hidden platform versions of the task. These findings suggest that in the absence of a functional neural system that includes dorsal striatum, spatial information predominantly controlled behavior even in the presence of a cue that the animals had previously been reinforced for approaching. In the absence of a functional hippocampal system behavior was not affected by spatial information and responding to local reinforced cues was enhanced. The results support the idea that different neural substrates in the mammalian nervous system acquire different types of information simultaneously and in parallel.",
"title": ""
},
{
"docid": "99c6fb7c765bf749fd40a78eadf3e723",
"text": "This paper presents a new design approach to nonlinear observers for Itô stochastic nonlinear systems with guaranteed stability. A stochastic contraction lemma is presented which is used to analyze incremental stability of the observer. A bound on the mean-squared distance between the trajectories of original dynamics and the observer dynamics is obtained as a function of the contraction rate and maximum noise intensity. The observer design is based on a non-unique state-dependent coefficient (SDC) form, which parametrizes the nonlinearity in an extended linear form. The observer gain synthesis algorithm, called linear matrix inequality state-dependent algebraic Riccati equation (LMI-SDARE), is presented. The LMI-SDARE uses a convex combination of multiple SDC parametrizations. An optimization problem with state-dependent linear matrix inequality (SDLMI) constraints is formulated to select the coefficients of the convex combination for maximizing the convergence rate and robustness against disturbances. Two variations of LMI-SDARE algorithm are also proposed. One of them named convex state-dependent Riccati equation (CSDRE) uses a chosen convex combination of multiple SDC matrices; and the other named Fixed-SDARE uses constant SDC matrices that are pre-computed by using conservative bounds of the system states while using constant coefficients of the convex combination pre-computed by a convex LMI optimization problem. A connection between contraction analysis and L2 gain of the nonlinear system is established in the presence of noise and disturbances. Results of simulation show superiority of the LMI-SDARE algorithm to the extended Kalman filter (EKF) and state-dependent differential Riccati equation (SDDRE) filter.",
"title": ""
},
{
"docid": "5455e7d53e6de4cbe97cbcdf6eea9806",
"text": "OBJECTIVE\nTo evaluate the clinical and radiological results in the surgical treatment of moderate and severe hallux valgus by performing percutaneous double osteotomy.\n\n\nMATERIAL AND METHOD\nA retrospective study was conducted on 45 feet of 42 patients diagnosed with moderate-severe hallux valgus, operated on in a single centre and by the same surgeon from May 2009 to March 2013. Two patients were lost to follow-up. Clinical and radiological results were recorded.\n\n\nRESULTS\nAn improvement from 48.14 ± 4.79 points to 91.28 ± 8.73 points was registered using the American Orthopedic Foot and Ankle Society (AOFAS) scale. A radiological decrease from 16.88 ± 2.01 to 8.18 ± 3.23 was observed in the intermetatarsal angle, and from 40.02 ± 6.50 to 10.51 ± 6.55 in hallux valgus angle. There was one case of hallux varus, one case of non-union, a regional pain syndrome type I, an infection that resolved with antibiotics, and a case of loosening of the osteosynthesis that required an open surgical refixation.\n\n\nDISCUSSION\nPercutaneous distal osteotomy of the first metatarsal when performed as an isolated procedure, show limitations when dealing with cases of moderate and severe hallux valgus. The described technique adds the advantages of minimally invasive surgery by expanding applications to severe deformities.\n\n\nCONCLUSIONS\nPercutaneous double osteotomy is a reproducible technique for correcting severe deformities, with good clinical and radiological results with a complication rate similar to other techniques with the advantages of shorter surgical times and less soft tissue damage.",
"title": ""
},
{
"docid": "cc204a8e12f47259059488bb421f8d32",
"text": "Phishing is a web-based attack that uses social engineering techniques to exploit internet users and acquire sensitive data. Most phishing attacks work by creating a fake version of the real site's web interface to gain the user's trust.. We applied different methods for detecting phishing using known as well as new features. In this we used the heuristic-based approach to handle phishing attacks, in this approached several website features are collected and used to identify the type of the website. The heuristic-based approach can recognize newly created fake websites in real-time. One intelligent approach based on genetic algorithm seems a potential solution that may effectively detect phishing websites with high accuracy and prevent it by blocking them.",
"title": ""
},
{
"docid": "db5ff75a7966ec6c1503764d7e510108",
"text": "Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.",
"title": ""
},
{
"docid": "f3e3e42495f474973b3c7508e82bb18a",
"text": "Compared with horizontal solar still, vertical solar still has better condensation but lower evaporation. To increase the evaporation, it needs a control system that moves a vertical solar still to follow the azimuth angle of the sun. This paper presents a design of a GPS-based solar tracker system, that can moves the vertical solar still follow the azimuth angle of the sun. Furthermore, to determine the effect of this solar tracker system, research was done by comparing the power of 2 solar cells placed in different positions. The first solar cell was placed upright and rotated with the solar tracker system. The second solar cell was placed horizontally. The result showed that the power generated by the first solar cell is greater than the second solar cell.",
"title": ""
},
{
"docid": "3e18a760083cd3ed169ed8dae36156b9",
"text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making",
"title": ""
},
{
"docid": "34690f455f9e539b06006f30dd3e512b",
"text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will consist typically of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.",
"title": ""
},
{
"docid": "3d8a6068f48e9a091c1ad7059890cfba",
"text": "Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) ~20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.",
"title": ""
},
{
"docid": "10ee57480485050a1bb52dcb9203bd26",
"text": "This paper investigates and evaluates coupled inductors (CIs) in the interleaved multiphase three-level dc-dc converter. If non-CIs are used in the multiphase three-level dc-dc converter, interleaving operation of the converter will increase inductor current ripple, although the overall output current ripple and common-mode (CM) voltage will become smaller. To reduce inductor current ripple, inverse-CIs are employed. The current ripple in the CI is analyzed in detail. The benefits of the three-level dc-dc converter with CIs under interleaving operation are evaluated. By adding CIs and working under interleaving operation, smaller inductor current ripple, smaller overall output current ripple, and smaller CM voltage can be achieved simultaneously compared with the noninterleaving case. The analysis results are verified by simulations and 10 kW scale-down experiments.",
"title": ""
},
{
"docid": "4437a0241b825fddd280517b9ae3565a",
"text": "The levels of pregnenolone, dehydroepiandrosterone (DHA), androstenedione, testosterone, dihydrotestosterone (DHT), oestrone, oestradiol, cortisol and luteinizing hormone (LH) were measured in the peripheral plasma of a group of young, apparently healthy males before and after masturbation. The same steroids were also determined in a control study, in which the psychological antipation of masturbation was encouraged, but the physical act was not carried out. The plasma levels of all steroids were significantly increased after masturbation, whereas steroid levels remained unchanged in the control study. The most marked changes after masturbation were observed in pregnenolone and DHA levels. No alterations were observed in the plasma levels of LH. Both before and after masturbation plasma levels of testosterone were significantly correlated to those of DHT and oestradiol, but not to those of the other steroids studied. On the other hand, cortisol levels were significantly correlated to those of pregnenolone, DHA, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT in seminal plasma were also estimated; they were all significantly correlated to the levels of the corresponding steroid in the systemic blood withdrawn both before and after masturbation. As a practical consequence, the results indicate that whenever both blood and semen are analysed, blood sampling must precede semen collection.",
"title": ""
},
{
"docid": "a53fd98780baa0830813543d5e246a63",
"text": "This paper covers a sales forecasting problem on e-commerce sites. To predict product sales, we need to understand customers’ browsing behavior and identify whether it is for purchase purpose or not. For this goal, we propose a new customer model, B2P, of aggregating predictive features extracted from customers’ browsing history. We perform experiments on a real world e-commerce site and show that sales predictions by our model are consistently more accurate than those by existing state-of-the-art baselines.",
"title": ""
}
] |
scidocsrr
|
e75df339d1e902ae2386fb414b64448d
|
Hierarchical Feature Extraction With Local Neural Response for Image Recognition
|
[
{
"docid": "2a56702663e6e52a40052a5f9b79a243",
"text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.",
"title": ""
}
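As a minimal sketch of the coding-then-pooling pipeline contrasted in the passage above — hard vector quantization of local descriptors followed by average or max pooling — the snippet below uses NumPy; the function names and array layouts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hard_code(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Hard vector quantization: one-hot assignment of each local descriptor
    (rows of `descriptors`) to its nearest codeword (rows of `codebook`)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = np.zeros((descriptors.shape[0], codebook.shape[0]))
    codes[np.arange(descriptors.shape[0]), d2.argmin(axis=1)] = 1.0
    return codes

def pool(codes: np.ndarray, how: str = "max") -> np.ndarray:
    """Summarize the coded descriptors of one spatial region into a single vector:
    max pooling keeps the strongest activation per codeword, averaging takes the mean."""
    return codes.max(axis=0) if how == "max" else codes.mean(axis=0)
```

Soft assignment or sparse coding would replace hard_code in this sketch, while the pooling step stays the same.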
] |
[
{
"docid": "e2b4b6bfe6ee9cc4caa4828e7eb4bf5d",
"text": "The translation course consists of an introductory lecture followed by twelve hours of seminars held over one semester. It is complemented by a nine-hour course that presents an overview of the English legal system and some aspects of comparative law. Authentic English-language documents (contracts, guarantees, judgments, summonses etc.) are studied and subsequently used as models for translation into English.",
"title": ""
},
{
"docid": "351beace260a731aaf8dcf6e6870ad99",
"text": "The field of Explainable Artificial Intelligence has taken steps towards increasing transparency in the decision-making process of machine learning models for classification tasks. Understanding the reasons behind the predictions of models increases our trust in them and lowers the risks of using them. In an effort to extend this to other tasks apart from classification, this thesis explores the interpretability aspect for sequence tagging models for the task of Named Entity Recognition (NER). This work proposes two approaches for adapting LIME, an interpretation method for classification, to sequence tagging and NER. The first approach is a direct adaptation of LIME to the task, while the second includes adaptations following the idea that entities are conceived as a group of words and we would like one explanation for the whole entity. Given the challenges in the evaluation of the interpretation method, this work proposes an extensive evaluation from different angles. It includes a quantitative analysis using the AOPC metric; a qualitative analysis that studies the explanations at instance and dataset levels as well as the semantic structure of the embeddings and the explanations; and a human evaluation to validate the model's behaviour. The evaluation has discovered patterns and characteristics to take into account when explaining NER models.",
"title": ""
},
{
"docid": "ad314a0a92c130a9b8550ac0fbc04d9d",
"text": "Beginning at either 1.5, 6 or 10 months of age, male mice from the A/J and C57BL/6J strains and their F1 hybrid, B6AF1/J were fed a diet (4.2 kcal/g) either ad libitum every day or in a restricted fashion by ad libitum feeding every other day. Relative to estimates for ad libitum controls, the body weights of the intermittently-fed restricted C57BL/6J and hybrid mice were reduced and mean and maximum life span were incremented when the every-other-day regimen was initiated at 1.5 or 6 months of age. When every-other-day feeding was introduced at 10 months of age, again both these genotypes lost body weight relative to controls; however, mean life span was not significantly affected although maximum life span was increased. Among A/J mice, intermittent feeding did not reduce body weight relative to ad libitum controls when introduced at 1.5 or 10 months of age; however, this treatment did increase mean and maximum life span when begun at 1.5 months, while it decreased mean and maximum life span when begun at 10 months. When restricted feeding was introduced to this genotype at 6 months of age, body weight reduction compared to control values was apparent at some ages, but the treatment had no significant effects on mean or maximum life span. These results illustrate that the effects of particular regimens of dietary restriction on body weight and life span are greatly dependent upon the genotype and age of initiation. Moreover, when examining the relationship of body weight to life span both between and within the various groups, it was clear that the complexity of this relationship made it difficult to predict that lower body weight would induce life span increment.",
"title": ""
},
{
"docid": "d2c202e120fecf444e77b08bd929e296",
"text": "Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with single speaker model. Moreover, we also tackle the problem of speaker interpolation by adding a new output layer (α-layer) on top of the multi-output branches. An identifying code is injected into the layer together with acoustic features of many speakers. Experiments show that the α-layer can effectively learn to interpolate the acoustic features between speakers.",
"title": ""
},
{
"docid": "025953bb13772965bd757216f58d2bed",
"text": "Designers use third-party intellectual property (IP) cores and outsource various steps in their integrated circuit (IC) design flow, including fabrication. As a result, security vulnerabilities have been emerging, forcing IC designers and end-users to reevaluate their trust in hardware. If an attacker gets hold of an unprotected design, attacks such as reverse engineering, insertion of malicious circuits, and IP piracy are possible. In this paper, we shed light on the vulnerabilities in very large scale integration (VLSI) design and fabrication flow, and survey design-for-trust (DfTr) techniques that aim at regaining trust in IC design. We elaborate on four DfTr techniques: logic encryption, split manufacturing, IC camouflaging, and Trojan activation. These techniques have been developed by reusing VLSI test principles.",
"title": ""
},
{
"docid": "79b26ac97deb39c4de11a87604003f26",
"text": "This paper presents a novel wheel-track-Leg hybrid Locomotion Mechanism that has a compact structure. Compared to most robot wheels that have a rigid round rim, the transformable wheel with a flexible rim can switch to track mode for higher efficiency locomotion on swampy terrain or leg mode for better over-obstacle capability on rugged road. In detail, the wheel rim of this robot is cut into four end-to-end circles to make it capable of transforming between a round circle with a flat ring (just like “O” and “∞”) to change the contact type between transformable wheels with the ground. The transformation principle and constraint conditions between different locomotion modes are explained. The driving methods and locomotion strategies on various terrains of the robot are analyzed. Meanwhile, an initial experiment is conducted to verify the design.",
"title": ""
},
{
"docid": "91cf217b2c5fa968bc4e893366ec53e1",
"text": "Importance\nPostpartum hypertension complicates approximately 2% of pregnancies and, similar to antepartum severe hypertension, can have devastating consequences including maternal death.\n\n\nObjective\nThis review aims to increase the knowledge and skills of women's health care providers in understanding, diagnosing, and managing hypertension in the postpartum period.\n\n\nResults\nHypertension complicating pregnancy, including postpartum, is defined as systolic blood pressure 140 mm Hg or greater and/or diastolic blood pressure 90 mm Hg or greater on 2 or more occasions at least 4 hours apart. Severe hypertension is defined as systolic blood pressure 160 mm Hg or greater and/or diastolic blood pressure 110 mm Hg or greater on 2 or more occasions repeated at a short interval (minutes). Workup for secondary causes of hypertension should be pursued, especially in patients with severe or resistant hypertension, hypokalemia, abnormal creatinine, or a strong family history of renal disease. Because severe hypertension is known to cause maternal stroke, women with severe hypertension sustained over 15 minutes during pregnancy or in the postpartum period should be treated with fast-acting antihypertension medication. Labetalol, hydralazine, and nifedipine are all effective for acute management, although nifedipine may work the fastest. For persistent postpartum hypertension, a long-acting antihypertensive agent should be started. Labetalol and nifedipine are also both effective, but labetalol may achieve control at a lower dose with fewer adverse effects.\n\n\nConclusions and Relevance\nProviders must be aware of the risks associated with postpartum hypertension and educate women about the symptoms of postpartum preeclampsia. Severe acute hypertension should be treated in a timely fashion to avoid morbidity and mortality. Women with persistent postpartum hypertension should be administered a long-acting antihypertensive agent.\n\n\nTarget Audience\nObstetricians and gynecologists, family physicians.\n\n\nLearning Objectives\nAfter completing this activity, the learner should be better able to assist patients and providers in identifying postpartum hypertension; provide a framework for the evaluation of new-onset postpartum hypertension; and provide instructions for the management of acute severe and persistent postpartum hypertension.",
"title": ""
},
{
"docid": "0e012c89f575d116e94b1f6718c8fe4d",
"text": "Tagging is an increasingly important task in natural language processing domains. As there are many natural language processing tasks which can be improved by applying disambiguation to the text, fast and high quality tagging algorithms are a crucial task in information retrieval and question answering. Tagging aims to assigning to each word of a text its correct tag according to the context in which the word is used. Part Of Speech (POS) tagging is a difficult problem by itself, since many words has a number of possible tags associated to it. In this paper we present a novel algorithm that deals with POS-tagging problem based on Harmony Search (HS) optimization method. This paper analyzes the relative advantages of HS metaheuristic approache to the well-known natural language processing problem of POS-tagging. In the experiments we conducted, we applied the proposed algorithm on linguistic corpora and compared the results obtained against other optimization methods such as genetic and simulated annealing algorithms. Experimental results reveal that the proposed algorithm provides more accurate results compared to the other algorithms.",
"title": ""
},
{
"docid": "994fcd84c9f2d75df6388cfe5ea33d06",
"text": "In this paper, we present a modeling and monitoring scheme of the friction between the wafer and polishing pad for the linear chemical-mechanical planarization (CMP) processes. Kinematic analysis of the linear CMP system is investigated and a distributed LuGre dynamic friction model is utilized to capture the friction forces generated by the wafer/pad interactions. We present an experimental validation of wafer/pad friction modeling and analysis. Pad conditioning and wafer film topography effects on the wafer/pad friction are also experimentally demonstrated. Finally, one application example is illustrated the use of friction torques for real-time monitoring the shallow trench isolation (STI) CMP processes.",
"title": ""
},
{
"docid": "9c8bc65635a9c8f0d8caf510399377f4",
"text": "El autor es José Luis Ortega, investigador del CSIC y miembro del Laboratorio de Cibermetría, que cuenta con una importante trayectoria investigadora con publicaciones nacionales e internacionales en el ámbito de la cibermetría, la visualización de información y el análisis de redes. La obra está escrita en un inglés claro y sencillo y su título refleja de forma precisa su contenido: los motores de búsqueda académicos.",
"title": ""
},
{
"docid": "2575bad473ef55281db460617e0a37c8",
"text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.",
"title": ""
},
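As a minimal sketch of the projection-based baseline that the passage above builds on, assuming a binarized plate image with characters as 1s and background as 0s; the gap threshold and array layout are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def projection_segments(binary_plate: np.ndarray, min_width: int = 2):
    """Split a binarized plate image into candidate character spans using
    the vertical projection profile (ink per column)."""
    profile = binary_plate.sum(axis=0)
    is_ink = profile > 0
    segments, start = [], None
    for x, ink in enumerate(is_ink):
        if ink and start is None:
            start = x                      # a run of inked columns begins
        elif not ink and start is not None:
            if x - start >= min_width:     # ignore spurious one-column blobs
                segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(is_ink)))
    return segments
```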
{
"docid": "a1726552412dfea08753a367bce65720",
"text": "The existence and diversity of human cultures are made possible by our species-specific cognitive capacities. But how? Do cultures emerge and diverge as a result of the deployment, over generations and in different populations, of general abilities to learn, imitate and communicate? What role if any do domain-specific evolved cognitive abilities play in the emergence and evolution of cultures? These questions have been approached from different vantage points in different disciplines. Here we present a view that is currently developing out of the converging work of developmental psychologists, evolutionary psychologists and cognitive anthropologists.",
"title": ""
},
{
"docid": "08951a16123c26f5ac4241457b539454",
"text": "High quality, physically accurate rendering at interactiv e rates has widespread application, but is a daunting task. We attempt t o bridge the gap between high-quality offline and interactive render ing by using existing environment mapping hardware in combinatio with a novel Image Based Rendering (IBR) algorithm. The primary c ontribution lies in performing IBR in reflection space. This me thod can be applied to ordinary environment maps, but for more phy sically accurate rendering, we apply reflection space IBR to ra diance environment maps. A radiance environment map pre-integrat s Bidirectional Reflection Distribution Function (BRDF) wit h a lighting environment. Using the reflection-space IBR algorithm o n radiance environment maps allows interactive rendering of ar bitr ry objects with a large class of complex BRDFs in arbitrary ligh ting environments. The ultimate simplicity of the final algor ithm suggests that it will be widely and immediately valuable giv en the ready availability of hardware assisted environment mappi ng. CR categories and subject descriptors: I.3.3 [Computer Graphics]: Picture/Image generation; I.3.7 [Image Proces sing]: Enhancement.",
"title": ""
},
{
"docid": "99b2cf752848a5b787b378719dc934f1",
"text": "This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.",
"title": ""
},
{
"docid": "4b0b59d137fad3c6a07cb15ac916de3c",
"text": "We describe a novel method for blind, single-image spectral super-resolution. While conventional superresolution aims to increase the spatial resolution of an input image, our goal is to spectrally enhance the input, i.e., generate an image with the same spatial resolution, but a greatly increased number of narrow (hyper-spectral) wavelength bands. Just like the spatial statistics of natural images has rich structure, which one can exploit as prior to predict high-frequency content from a low resolution image, the same is also true in the spectral domain: the materials and lighting conditions of the observed world induce structure in the spectrum of wavelengths observed at a given pixel. Surprisingly, very little work exists that attempts to use this diagnosis and achieve blind spectral super-resolution from single images. We start from the conjecture that, just like in the spatial domain, we can learn the statistics of natural image spectra, and with its help generate finely resolved hyper-spectral images from RGB input. Technically, we follow the current best practice and implement a convolutional neural network (CNN), which is trained to carry out the end-to-end mapping from an entire RGB image to the corresponding hyperspectral image of equal size. We demonstrate spectral super-resolution both for conventional RGB images and for multi-spectral satellite data, outperforming the state-of-the-art.",
"title": ""
},
{
"docid": "bf126b871718a5ee09f1e54ea5052d20",
"text": "Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and inadequate number of training samples leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme allows to combine advantages of cross-entropy and Dice loss leading to qualitative improvements in segmentation. We demonstrate computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps we extract clinically relevant cardiac parameters and hand-craft features which reflect the clinical diagnostic analysis and train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. Our approach in ACDC-2017 challenge stood second place for segmentation and first place in automated cardiac disease diagnosis tasks with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge our approach attained 0.74 Jaccard index, which is so far the highest published result in fully automated algorithms. In the Kaggle challenge our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. Our approach combined both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application.",
"title": ""
},
{
"docid": "026ccaa9af38ee0eccbb80d8ce0243c2",
"text": "Community assembly provides a conceptual foundation for understanding the processes that determine which and how many species live in a particular locality. Evidence suggests that community assembly often leads to a single stable equilibrium, such that the conditions of the environment and interspecific interactions determine which species will exist there. In such cases, regions of local communities with similar environmental conditions should have similar community composition. Other evidence suggests that community assembly can lead to multiple stable equilibria. Thus, the resulting community depends on the assembly history, even when all species have access to the community. In these cases, a region of local communities with similar environmental conditions can be very dissimilar in their community composition. Both regional and local factors should determine the patterns by which communities assemble, and the resultant degree of similarity or dissimilarity among localities with similar environments. A single equilibrium in more likely to be realized in systems with small regional species pools, high rates of connectance, low productivity and high disturbance. Multiple stable equilibria are more likely in systems with large regional species pools, low rates of connectance, high productivity and low disturbance. I illustrate preliminary evidence for these predictions from an observational study of small pond communities, and show important effects on community similarity, as well as on local and regional species richness.",
"title": ""
},
{
"docid": "b1635df751ea1658b38137bc5a6c18b9",
"text": "Visual pigments in many animal species, including stomatopod crustaceans, are adapted to the photic environments inhabited by that species. However, some species occupy a diversity of environments as adults (such as a range of depths in the ocean), and a single set of visual pigments would not be equally adaptive for all habitats in which individuals live. We characterized the visual pigment complements of three species of stomatopod crustaceans, Haptosquilla trispinosa, Gonodactylellus affinis, and Gonodactylopsis spongicola, which are unusual for this group in that each lives at depths from the subtidal to several tens of meters. Using microspectrophotometry, we determined the visual pigments in all classes of main rhabdoms in individuals of each species from shallow or deep habitats. Each species expressed the typical diversity of visual pigments commonly found in stomatopods, but there was little or no evidence of differential expression of visual pigments in animals of any species collected from different depths. Vision in these species, therefore, is not tuned to spectral characteristics of the photic environment by varying the assemblages of visual pigments appearing in their retinas.",
"title": ""
},
{
"docid": "1004d314aecd1fd13c68c6ea2db9e8bd",
"text": "Hand, foot and mouth disease (HFMD) is a highly contagious viral infection affecting young children during the spring to fall seasons. Recently, serious outbreaks of HFMD were reported frequently in the Asia-Pacific region, including China and Korea. The symptoms of HFMD are usually mild, comprising fever, loss of appetite, and a rash with blisters, which do not need specific treatment. However, there are uncommon neurological or cardiac complications such as meningitis and acute flaccid paralysis that can be fatal. HFMD is most commonly caused by infection with coxsackievirus A16, and secondly by enterovirus 71 (EV71). Many other strains of coxsackievirus and enterovirus can also cause HFMD. Importantly, HFMD caused by EV71 tends to be associated with fatal complications. Therefore, there is an urgent need to protect against EV71 infection. Development of vaccines against EV71 would be the most effective approach to prevent EV71 outbreaks. Here, we summarize EV71 infection and development of vaccines, focusing on current scientific and clinical progress.",
"title": ""
},
{
"docid": "67033d89acee89763fa1b2a06fe00dc4",
"text": "We demonstrate a novel query interface that enables users to construct a rich search query without any prior knowledge of the underlying schema or data. The interface, which is in the form of a single text input box, interacts in real-time with the users as they type, guiding them through the query construction. We discuss the issues of schema and data complexity, result size estimation, and query validity; and provide novel approaches to solving these problems. We demonstrate our query interface on two popular applications; an enterprise-wide personnel search, and a biological information database.",
"title": ""
}
] |
scidocsrr
|
d4793757d335d0616fc789e26cd2ac32
|
A0C: Alpha Zero in Continuous Action Space
|
[
{
"docid": "45940a48b86645041726120fb066a1fa",
"text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
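As a minimal sketch of the bandit-style child selection at the core of UCT described above, assuming a simple Node class that stores visit counts and accumulated returns; the exploration constant and class layout are illustrative assumptions, not the paper's implementation.

```python
import math
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0  # sum of sampled returns backed up through this node

def uct_select(node: Node, c: float = 1.4) -> Node:
    """Pick the child maximizing mean return plus the UCB1 exploration bonus:
    value/visits + c * sqrt(ln N(parent) / N(child))."""
    unvisited = [ch for ch in node.children if ch.visits == 0]
    if unvisited:
        return random.choice(unvisited)    # sample every action at least once
    log_n = math.log(node.visits)
    return max(node.children,
               key=lambda ch: ch.value / ch.visits + c * math.sqrt(log_n / ch.visits))
```

Repeated selection, expansion, rollout and backup of returns along the visited path would complete the planner in this sketch.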
{
"docid": "d4a0b5558045245a55efbf9b71a84bc3",
"text": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.",
"title": ""
},
{
"docid": "9ec7b122117acf691f3bee6105deeb81",
"text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"title": ""
}
] |
[
{
"docid": "6241da02b35863e8aa0ea08292340de5",
"text": "PmSVM (Power Mean SVM), a classifier that trains significantly faster than state-of-the-art linear and non-linear SVM solvers in large scale visual classification tasks, is presented. PmSVM also achieves higher accuracies. A scalable learning method for large vision problems, e.g., with millions of examples or dimensions, is a key component in many current vision systems. Recent progresses have enabled linear classifiers to efficiently process such large scale problems. Linear classifiers, however, usually have inferior accuracies in vision tasks. Non-linear classifiers, on the other hand, may take weeks or even years to train. We propose a power mean kernel and present an efficient learning algorithm through gradient approximation. The power mean kernel family include as special cases many popular additive kernels. Empirically, PmSVM is up to 5 times faster than LIBLINEAR, and two times faster than state-of-the-art additive kernel classifiers. In terms of accuracy, it outperforms state-of-the-art additive kernel implementations, and has major advantages over linear SVM.",
"title": ""
},
{
"docid": "37936de50a1d3fa8612a465b6644c282",
"text": "Nature uses a limited, conservative set of amino acids to synthesize proteins. The ability to genetically encode an expanded set of building blocks with new chemical and physical properties is transforming the study, manipulation and evolution of proteins, and is enabling diverse applications, including approaches to probe, image and control protein function, and to precisely engineer therapeutics. Underpinning this transformation are strategies to engineer and rewire translation. Emerging strategies aim to reprogram the genetic code so that noncanonical biopolymers can be synthesized and evolved, and to test the limits of our ability to engineer the translational machinery and systematically recode genomes.",
"title": ""
},
{
"docid": "c02fb121399e1ed82458fb62179d2560",
"text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.",
"title": ""
},
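As a minimal sketch of the tiered, precision-ordered sieve idea described above, assuming mentions are dicts with hypothetical 'text', 'head', 'number' and 'gender' fields; the two tiers shown stand in for the paper's larger set of deterministic models.

```python
def string_match(m1, m2):
    # Highest-precision tier: identical mention strings are taken to corefer.
    return m1["text"].lower() == m2["text"].lower()

def head_match(m1, m2):
    # Lower-precision tier: shared head word plus compatible number and gender.
    return (m1["head"] == m2["head"]
            and m1.get("number") == m2.get("number")
            and m1.get("gender") == m2.get("gender"))

def sieve_coreference(mentions, tiers=(string_match, head_match)):
    """Apply deterministic tiers from highest to lowest precision; a mention
    keeps the first (most precise) antecedent link it receives."""
    cluster_of = {i: i for i in range(len(mentions))}  # start with singleton clusters
    for tier in tiers:
        for i, mention in enumerate(mentions):
            if cluster_of[i] != i:
                continue                    # already linked by a stronger tier
            for j in range(i):              # consider earlier mentions as antecedents
                if tier(mention, mentions[j]):
                    cluster_of[i] = cluster_of[j]
                    break
    return cluster_of
```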
{
"docid": "aec1d7de7ddd0c9991c05611c20450e4",
"text": "A set of circles, rectangles, and convex polygons are to be cut from rectangular design plates to be produced, or from a set of stocked rectangles of known geometric dimensions. The objective is to minimize the area of the design rectangles. The design plates are subject to lower and upper bounds of their widths and lengths. The objects are free of any orientation restrictions. If all nested objects fit into one design or stocked plate the problem is formulated and solved as a nonconvex nonlinear programming problem. If the number of objects cannot be cut from a single plate, additional integer variables are needed to represent the allocation problem leading to a nonconvex mixed integer nonlinear optimization problem. This is the first time that circles and arbitrary convex polygons are treated simultaneously in this context. We present exact mathematical programming solutions to both the design and allocation problem. For small number of objects to be cut we compute globally optimal solutions. One key idea in the developed NLP and MINLP models is to use separating hyperplanes to ensure that rectangles and polygons do not overlap with each other or with the circles. Another important idea used when dealing with several resource rectangles is to develop a model formulation which connects the binary variables only to the variables representing the center of the circles or the vertices of the polytopes but not to the nonoverlap or shape constraints. We support the solution process by symmetry breaking constraints. In addition we compute lower bounds, which are constructed by a relaxed model in which each polygon is replaced by the largest circle fitting into that polygon. We have successfully applied several solution techniques to solve this problem among them the Branch&Reduce Optimization Navigator (BARON) and the LindoGlobal solver called from GAMS, and, as described in Rebennack et al. (2008, [21]), a column enumeration approach in which the columns represent the assignments. Good feasible solutions are computed within seconds or minutes usually during preprocessing. In most cases they turn out to be globally optimal. For up to 10 circles, we prove global optimality up to a gap of the order of 10 in short time. Cases with a modest number of objects, for instance, 6 circles and 3 rectangles, are also solved in short time to global optimality. For test instances involving non-rectangular polygons it is difficult to obtain small gaps. In such cases we are content to obtain gaps of the order of 10 percent.",
"title": ""
},
{
"docid": "2bda1b1482ca7b74078b10654576b24d",
"text": "A pattern recognition pipeline consists of three stages: data pre-processing, feature extraction, and classification. Traditionally, most research effort is put into extracting appropriate features. With the advent of GPU-accelerated computing and Deep Learning, appropriate features can be discovered as part of the training process. Understanding these discovered features is important: we might be able to learn something new about the domain in which our model operates, or be comforted by the fact that the model extracts “sensible” features. This work discusses and applies methods of visualizing the features learned by Convolutional Neural Networks (CNNs). Our main contribution is an extension of an existing visualization method. The extension makes the method able to visualize the features in intermediate layers of a CNN. Most notably, we show that the features extracted in the deeper layers of a CNN trained to diagnose Diabetic Retinopathy are also the features used by human clinicians. Additionally, we published our visualization method in a software package.",
"title": ""
},
{
"docid": "d7793313ab21020e79e41817b8372ee8",
"text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "fe06ac2458e00c5447a255486189f1d1",
"text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.",
"title": ""
},
{
"docid": "5814f71c0fbbd1721f6c3ad948895c62",
"text": "Technological innovations made it possible to create more and more realistic figures. Such figures are often created according to human appearance and behavior allowing interaction with artificial systems in a natural and familiar way. In 1970, the Japanese roboticist Masahiro Mori observed, however, that robots and prostheses with a very – but not perfect – human-like appearance can elicit eerie, uncomfortable, and even repulsive feelings. While real people or stylized figures do not seem to evoke such negative feelings, human depictions with only minor imperfections fall into the “uncanny valley,” as Mori put it. Today, further innovations in computer graphics led virtual characters into the uncanny valley. Thus, they have been subject of a number of disciplines. For research, virtual characters created by computer graphics are particularly interesting as they are easy to manipulate and, thus, can significantly contribute to a better understanding of the uncanny valley and human perception. For designers and developers of virtual characters such as in animated movies or games, it is important to understand how the appearance and human-likeness or virtual realism influence the experience and interaction of the user and how they can create believable and acceptable avatars and virtual characters despite the uncanny valley. This work investigates these aspects and is the next step in the exploration of the uncanny valley.",
"title": ""
},
{
"docid": "baf8d2176f8c9058967fb3636022cd72",
"text": "The ability to provide assistance for a student at the appropriate level is invaluable in the learning process. Not only does it aids the student's learning process but also prevents problems, such as student frustration and floundering. Students' key demographic characteristics and their marks in a small number of written assignments can constitute the training set for a regression method in order to predict the student's performance. The scope of this work compares some of the state of the art regression algorithms in the application domain of predicting students' marks. A number of experiments have been conducted with six algorithms, which were trained using datasets provided by the Hellenic Open University. Finally, a prototype version of software support tool for tutors has been constructed implementing the M5rules algorithm, which proved to be the most appropriate among the tested algorithms.",
"title": ""
},
{
"docid": "88302ac0c35e991b9db407f268fdb064",
"text": "We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the chip layout based on the state-of-the-art memory, LPDDR4 where McDRAM is equipped with 2048 MACs in a single chip package with a small area overhead (4.7%). Compared with the state-of-the-art accelerator, TPU and the power-efficient GPU, Nvidia P4, McDRAM offers <inline-formula> <tex-math notation=\"LaTeX\">$9.5{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$14.4{\\times }$ </tex-math></inline-formula> speedup, respectively, in the case that the large-scale MLPs and RNNs adopt the batch size of 1. McDRAM also gives <inline-formula> <tex-math notation=\"LaTeX\">$2.1{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$3.7{\\times }$ </tex-math></inline-formula> better computational efficiency in TOPS/W than TPU and P4, respectively, for the large batches.",
"title": ""
},
{
"docid": "5a9d0e5046129bbdad435980f125db37",
"text": "The impact of channel width scaling on low-frequency noise (LFN) and high-frequency performance in multifinger MOSFETs is reported in this paper. The compressive stress from shallow trench isolation (STI) cannot explain the lower LFN in extremely narrow devices. STI top corner rounding (TCR)-induced Δ<i>W</i> is identified as an important factor that is responsible for the increase in transconductance <i>Gm</i> and the reduction in LFN with width scaling to nanoscale regime. A semi-empirical model was derived to simulate the effective mobility (μ<sub>eff</sub>) degradation from STI stress and the increase in effective width (<i>W</i><sub>eff</sub>) from Δ<i>W</i> due to STI TCR. The proposed model can accurately predict width scaling effect on <i>Gm</i> based on a tradeoff between μ<sub>eff</sub> and <i>W</i><sub>eff</sub>. The enhanced STI stress may lead to an increase in interface traps density (<i>N</i><sub>it</sub>), but the influence is relatively minor and can be compensated by the <i>W</i><sub>eff</sub> effect. Unfortunately, the extremely narrow devices suffer <i>fT</i> degradation due to an increase in <i>C</i><sub>gg</sub>. The investigation of impact from width scaling on μ<sub>eff</sub>, <i>Gm</i>, and LFN, as well as the tradeoff between LFN and high-frequency performance, provides an important layout guideline for analog and RF circuit design.",
"title": ""
},
{
"docid": "8a478da1c2091525762db35f1ac7af58",
"text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.",
"title": ""
},
{
"docid": "88fb71e503e0d0af7515dd8489061e25",
"text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970smade Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "404a32f89d6273a63b7ae945514655d2",
"text": "Miniaturized minimally-invasive implants with wireless power and communication links have the potential to enable closed-loop treatments and precise diagnostics. As with wireless power transfer, robust wireless communication between implants and external transceivers presents challenges and tradeoffs with miniaturization and increasing depth. Both link efficiency and available bandwidth need to be considered for communication capacity. This paper analyzes and reviews active electromagnetic and ultrasonic communication links for implants. Example transmitter designs are presented for both types of links. Electromagnetic links for mm-sized implants have demonstrated high data rates sufficient for most applications up to Mbps range; nonetheless, they have so far been limited to depths under 5 cm. Ultrasonic links, on the other hand, have shown much deeper transmission depths, but with limited data rate due to their low operating frequency. Spatial multiplexing techniques are proposed to increase ultrasonic data rates without additional power or bandwidth.",
"title": ""
},
{
"docid": "a57aa7ff68f7259a9d9d4d969e603dcd",
"text": "Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And seeing as it is the people who form the societies, a constantly changing society is only natural. However something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments that indicate that the changes seem to be following each other faster, especially over the last few decades. We can for instance, point to the pluralisation (or a growing versatility), differentialisation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.",
"title": ""
},
{
"docid": "423c37020f097cf42635b0936709c7fe",
"text": "Two major goals in machine learning are the discovery of comp lex multidimensional solutions and continual improvement of existing solutions. In this paper, we argue thatcomplexification, i.e. the incremental elaboration of solutions through adding new structure, ach ieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenti ng Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT i s applied to an open-ended coevolutionary robot duel domain where robot controllers compete head to head. Be caus the robot duel domain supports a wide range of sophisticated strategies, and because coevolutio n benefits from an escalating arms race, it serves as a suitable testbed for observing the effect of evolving in creasingly complex controllers. The result is an arms race of increasingly sophisticated strategies. When c ompared to the evolution of networks with fixed structure, complexifying networks discover significantly more sophisticated strategies. The results suggest that in order to realize the full potential of evolution, and search in general, solutions must be allowed to complexify as well as optimize.",
"title": ""
},
{
"docid": "6c92652aa5bab1b25910d16cca697d48",
"text": "Intrusion detection has attracted a considerable interest from researchers and industries. The community, after many years of research, still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data, with changing patterns in real time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection which influences the effectiveness of machine learning (ML) IDS is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems. Keywords— Shallow network, Deep networks, Intrusion detection, False positive alarm rates and True positive alarm rates 1.0 INTRODUCTION Computer networks have developed rapidly over the years contributing significantly to social and economic development. International trade, healthcare systems and military capabilities are examples of human activity that increasingly rely on networks. This has led to an increasing interest in the security of networks by industry and researchers. The importance of Intrusion Detection Systems (IDS) is critical as networks can become vulnerable to attacks from both internal and external intruders [1], [2]. An IDS is a detection system put in place to monitor computer networks. These have been in use since the 1980’s [3]. By analysing patterns of captured data from a network, IDS help to detect threats [4]. These threats can be devastating, for example, Denial of service (DoS) denies or prevents legitimate users resource on a network by introducing unwanted traffic [5]. Malware is another example, where attackers use malicious software to disrupt systems [6].",
"title": ""
}
] |
scidocsrr
|
2491c25610c6bdffca3b04baf3ce8197
|
Structure from Motion with Objects
|
[
{
"docid": "bd4d6e83ccf5da959dac5bbc174d9d6f",
"text": "This paper addresses the structure-and-motion problem, that requires to find camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented, that departs from the prevailing sequential paradigm and embraces instead a hierarchical approach. This method has several advantages, like a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.",
"title": ""
}
] |
[
{
"docid": "dcd6effc28744aa875a37ad28ecc68e1",
"text": "The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.",
"title": ""
},
{
"docid": "522226d646559018812b7fec8eed26a1",
"text": "Diabetes represents one of the most common and debilitating conditions seen among Kaiser Permanente (KP) members. Because care often involves multiple providers and because follow-up requires persistence by patients and clinicians alike, ideal outcomes are often difficult to achieve. Management of diabetes therefore offers an excellent opportunity to practice population management–-a systems approach designed to ensure excellent care. Accordingly, through a broad KP collaboration, the Care Management Institute (CMI) developed a comprehensive approach to adult diabetes care: the Integrated Diabetes Care (IDC) Program. The IDC Program has three elements: an internally published report, Clinical Practice Guidelines for Adult Diabetes Care; a set of tools for applying population management and patient empowerment concepts; and an outcomes measurement component, ie, instruments for evaluating IDC Program impact and gathering feedback. In this article, we describe the IDC Program and the process by which it was developed. Included are specific examples of the tools and how they can be used at the population level and by individual clinicians in caring for patients. (top right) NEIL A. SOLOMON, MD, is the Clinical Strategies Consultant in the Care Management Institute at the Program Offices of Kaiser Permanente. His work focuses on improving quality and efficiency through the development of disease management strategies and other population-based health care innovations. He also works with Permanente physician leaders to review information on practice variation and to disseminate successful practices for internal clinical management improvement.",
"title": ""
},
{
"docid": "3f48327ca2125df3a6da0c1e54131013",
"text": "Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. Images were acquired with and without an inflatable silicon vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach.",
"title": ""
},
{
"docid": "e7a9c8d3250f6937a3197f358676b679",
"text": "Virtualization is a mature technology which has shown to provide computing resource and cost optimization while enabling consolidation, isolation and hardware abstraction through the concept of virtual machine. Recently, by sharing the operating system resources and simplifying the deployment of applications, containers are getting a more and more popular alternative to virtualization for specific use cases. As a result, today these two technologies are competing to provide virtual instances for cloud computing, Network Functions Virtualization (NFV), High Performance Computing (HPC), avionic and automotive platforms. In this paper, the performance of the most important open source hypervisor (KVM and Xen) and container (Docker) solutions are compared on the ARM architecture, which is rapidly emerging in the server world. The extensive system and Input/Output (I/O) performance measurements included in this paper show a slightly better performance for containers in CPU bound workloads and request/response networking; conversely, thanks to their caching mechanisms, hypervisors perform better in most disk I/O operations and TCP streaming benchmark.",
"title": ""
},
{
"docid": "f12749ba8911e8577fbde2327c9dc150",
"text": "Regardless of successful applications of the convolutional neural networks (CNNs) in different fields, its application to seismic waveform classification and first-break (FB) picking has not been explored yet. This letter investigates the application of CNNs for classifying time-space waveforms from seismic shot gathers and picking FBs of both direct wave and refracted wave. We use representative subimage samples with two types of labeled waveform classification to supervise CNNs training. The goal is to obtain the optimal weights and biases in CNNs, which are solved by minimizing the error between predicted and target label classification. The trained CNNs can be utilized to automatically extract a set of time-space attributes or features from any subimage in shot gathers. These attributes are subsequently inputted to the trained fully connected layer of CNNs to output two values between 0 and 1. Based on the two-element outputs, a discriminant score function is defined to provide a single indication for classifying input waveforms. The FB is then located from the calculated score maps by sequentially using a threshold, the first local minimum rule of every trace and a median filter. Finally, we adopt synthetic and real shot data examples to demonstrate the effectiveness of CNNs-based waveform classification and FB picking. The results illustrate that CNN is an efficient automatic data-driven classifier and picker.",
"title": ""
},
{
"docid": "da9ad1156191f725b1a55f7b886b7746",
"text": "As the quality of natural language generated by artificial intelligence systems improves, writing interfaces can support interventions beyond grammar-checking and spell-checking, such as suggesting content to spark new ideas. To explore the possibility of machine-in-the-loop creative writing, we performed two case studies using two system prototypes, one for short story writing and one for slogan writing. Participants in our studies were asked to write with a machine in the loop or alone (control condition). They assessed their writing and experience through surveys and an open-ended interview. We collected additional assessments of the writing from Amazon Mechanical Turk crowdworkers. Our findings indicate that participants found the process fun and helpful and could envision use cases for future systems. At the same time, machine suggestions do not necessarily lead to better written artifacts. We therefore suggest novel natural language models and design choices that may better support creative writing.",
"title": ""
},
{
"docid": "1e81bb30757f4863dbde4e0a212eaa09",
"text": "This paper compares the removal performances of two complete wastewater treatment plants (WWTPs) for all priority substances listed in the Water Framework Directive and additional compounds of interest including flame retardants, surfactants, pesticides, and personal care products (PCPs) (n = 104). First, primary treatments such as physicochemical lamellar settling (PCLS) and primary settling (PS) are compared. Similarly, biofiltration (BF) and conventional activated sludge (CAS) are then examined. Finally, the removal efficiency per unit of nitrogen removed of both WWTPs for micropollutants is discussed, as nitrogenous pollution treatment results in a special design of processes and operational conditions. For primary treatments, hydrophobic pollutants (log K ow > 4) are well removed (>70 %) for both systems despite high variations of removal. PCLS allows an obvious gain of about 20 % regarding pollutant removals, as a result of better suspended solids elimination and possible coagulant impact on soluble compounds. For biological treatments, variations of removal are much weaker, and the majority of pollutants are comparably removed within both systems. Hydrophobic and volatile compounds are well (>60 %) or very well removed (>80 %) by sorption and volatilization. Some readily biodegradable molecules are better removed by CAS, indicating a better biodegradation. A better sorption of pollutants on activated sludge could be also expected considering the differences of characteristics between a biofilm and flocs. Finally, comparison of global processes efficiency using removals of micropollutants load normalized to nitrogen shows that PCLS + BF is as efficient as PS + CAS despite a higher compactness and a shorter hydraulic retention time (HRT). Only some groups of pollutants seem better removed by PS + CAS like alkylphenols, flame retardants, or di-2-ethylhexyl phthalate (DEHP), thanks to better biodegradation and sorption resulting from HRT and biomass characteristics. For both processes, and out of the 68 molecules found in raw water, only half of them are still detected in the water discharged, most of the time close to their detection limit. However, some of them are detected at higher concentrations (>1 μg/L and/or lower than environmental quality standards), which is problematic as they represent a threat for aquatic environment.",
"title": ""
},
{
"docid": "39e3056acbebeed983278c7eb2eca73f",
"text": "Various deep learning models have recently been applied to predictive modeling of Electronic Health Records (EHR). In medical claims data, which is a particular type of EHR data, each patient is represented as a sequence of temporally ordered irregularly sampled visits to health providers, where each visit is recorded as an unordered set of medical codes specifying patient's diagnosis and treatment provided during the visit. Based on the observation that different patient conditions have different temporal progression patterns, in this paper we propose a novel interpretable deep learning model, called Timeline. The main novelty of Timeline is that it has a mechanism that learns time decay factors for every medical code. This allows the Timeline to learn that chronic conditions have a longer lasting impact on future visits than acute conditions. Timeline also has an attention mechanism that improves vector embeddings of visits. By analyzing the attention weights and disease progression functions of Timeline, it is possible to interpret the predictions and understand how risks of future visits change over time. We evaluated Timeline on two large-scale real world data sets. The specific task was to predict what is the primary diagnosis category for the next hospital visit given previous visits. Our results show that Timeline has higher accuracy than the state of the art deep learning models based on RNN. In addition, we demonstrate that time decay factors and attentions learned by Timeline are in accord with the medical knowledge and that Timeline can provide a useful insight into its predictions.",
"title": ""
},
{
"docid": "6678755e7df445d7aae467b3fc21c613",
"text": "Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. In addition, we also propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier if the data is known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas including handwriting recognition, medical diagnosis and remote sensing, and then compare our algorithm against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of the classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires as much as 60 times lower training time on some datasets. Our comparison with the support vector machine (SVM) also shows that, the GLD, together with the LNS, requires as much as 150 times lower training time to achieve an equivalent classification accuracy on some of the datasets. Thus, our algorithms can provide a cheap and reliable option for classification in a lot of expert systems.",
"title": ""
},
{
"docid": "d70946cd43b73be4c68d1858bebc91fe",
"text": "A truly autonomous mobile robot have to solve the SLAM problem (i.e. simultaneous map building and pose estimation) in order to navigate in an unknown environment. Unfortunately, a universal solution for the problem hasn't been proposed yet. The tinySLAM algorithm that has a compact and clear code was designed to solve SLAM in an indoor environment using a noisy laser scanner. This paper introduces the vinySLAM method that enhances tinySLAM with the Transferable Belief Model to improve its robustness and accuracy. Proposed enhancements affect scan matching and occupancy tracking keeping simplicity and clearness of the original code. The evaluation on publicly available datasets shows significant robustness and accuracy improvements.",
"title": ""
},
{
"docid": "e7a6bb8f63e35f3fb0c60bdc26817e03",
"text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management",
"title": ""
},
{
"docid": "8a84a3376512c9d291e22ae2ffe70331",
"text": "Arguments about the existence of language-specific neural systems and particularly arguments about the independence of syntactic and semantic processing have recently focused on differences between the event-related potentials (ERPs) elicited by violations of syntactic structure (e.g. the P600) and those elicited by violations of semantic expectancy (e.g. the N400). However, the scalp distribution of the P600 component elicited by syntactic violations appears to resemble that elicited by rare categorical events (\"odd-balls\") in non-linguistic contexts, frequently termed the P3b. The relationship between the P600 and the P3b was explored by manipulating the grammaticality of sentences read for comprehension, as well as two factors known to influence P3b amplitude: odd-ball probability and event saliency. Oddball probability was manipulated by varying the frequency of morphosyntactic violations within blocks of sentences, and event saliency was manipulated by using two types of morphosyntactic violations, one of which was more striking than the other. The results indicate that the amplitude of the P600, like the P3b, was sensitive to both the probability and saliency manipulations, and that the scalp distributions for the effect of probability and grammaticality are essentially similar. An unexpected, but not wholly surprising, finding was the elicitation of an anterior negativity between 300 and 500 msec post-word onset, which may index working memory operations involved in sentence processing.",
"title": ""
},
{
"docid": "acc700d965586f5ea65bdcb67af38fca",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.",
"title": ""
},
{
"docid": "1f8ac49b7e723a3ac45307211ce80d6e",
"text": "Morphological development, including the body proportions, fins, pigmentation and labyrinth organ, in laboratory-hatched larval and juvenile three-spot gourami Trichogaster trichopterus was described. In addition, some wild larval and juvenile specimens were observed for comparison. Body lengths of larvae and juveniles were 2.5 ± 0.1 mm just after hatching (day 0) and 9.2 ± 1.4 mm on day 22, reaching 20.4 ± 5.0 mm on day 40. Aggregate fin ray numbers attained their full complements in juveniles >11.9 mm BL. Preflexion larvae started feeding on day 3 following upper and lower jaw formation, the yolk being completely absorbed by day 11. Subsequently, oblong conical teeth appeared in postflexion larvae >6.4 mm BL (day 13). Melanophores on the body increased with growth, and a large spot started forming at the caudal margin of the body in flexion postlarvae >6.7 mm BL, followed by a second large spot positioned posteriorly on the midline in postflexion larvae >8.6 mm BL. The labyrinth organ differentiated in postflexion larvae >7.9 mm BL (day 19). For eye diameter and the first soft fin ray of pelvic fin length, the proportions in laboratory-reared specimens were smaller than those in wild specimens in 18.5–24.5 mm BL. The pigmentation pattern of laboratory-reared fish did not distinctively differ from that in the wild ones. Comparisons with larval and juvenile morphology of a congener T. pectoralis revealed several distinct differences, particularly in the numbers of myomeres, pigmentations and the proportional length of the first soft fin ray of the pelvic fin.",
"title": ""
},
{
"docid": "7a7d43299511f5852080b4a5989c4b0c",
"text": "Precision phenotyping, especially the use of image analysis, allows researchers to gain information on plant properties and plant health. Aerial image detection with unmanned aerial vehicles (UAVs) provides new opportunities in precision farming and precision phenotyping. Precision farming has created a critical need for spatial data on plant density. The plant number reflects not only the final field emergence but also allows a more precise assessment of the final yield parameters. The aim of this work is to advance UAV use and image analysis as a possible highthroughput phenotyping technique. In this study, four different maize cultivars were planted in plots with different seeding systems (in rows and equidistantly spaced) and different nitrogen fertilization levels (applied at 50, 150 and 250 kg N/ha). The experimental field, encompassing 96 plots, was overflown at a 50-m height with an octocopter equipped with a 10-megapixel camera taking a picture every 5 s. Images were recorded between BBCH 13–15 (it is a scale to identify the phenological development stage of a plant which is here the 3to 5-leaves development stage) when the color of young leaves differs from older leaves. Close correlations up to R2 = 0.89 were found between in situ and image-based counted plants adapting a decorrelation stretch contrast enhancement procedure, which enhanced color differences in the images. On average, the error between visually and digitally counted plants was ≤5%. Ground cover, as determined by analyzing green pixels, ranged between 76% and 83% at these stages. However, the correlation between ground cover and digitally counted plants was very low. The presence of weeds and blurry effects on the images represent possible errors in counting plants. In conclusion, the final field emergence of maize can rapidly be assessed and allows more precise assessment of the final yield parameters. The use of UAVs and image processing has the potential to optimize farm management and to support field experimentation for agronomic and breeding purposes.",
"title": ""
},
{
"docid": "391fb9de39cb2d0635f2329362db846e",
"text": "In recent years, there has been an explosion of interest in mining time series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.",
"title": ""
},
{
"docid": "7b0fc8edb3228db9c85f27e0180adbc2",
"text": "The notion of ransomware has actually been around for quite some time. In 1989, Dr Joseph Popp distributed a trojan called PC Cyborg in which malware would hide all folders and encrypt files on the PC’s C: drive. A script delivered a ransom message demanding that $189 be directed to the PC Cyborg Corporation. The afflicted PC wouldn’t function until the ransom was paid and the malware’s actions were reversed. Since then, numerous enhancements to this type of scheme have been made, especially in the area of stronger file encryption. Now, it’s virtually impossible for victims to decrypt their own files.",
"title": ""
},
{
"docid": "53f1dfe2fa86679a1d0da19d8671c50f",
"text": "We demonstrate 3D phase and absorption recovery from partially coherent intensity images captured with a programmable LED array source. Images are captured through-focus with four different illumination patterns. Using first Born and weak object approximations (WOA), a linear 3D differential phase contrast (DPC) model is derived. The partially coherent transfer functions relate the sample’s complex refractive index distribution to intensity measurements at varying defocus. Volumetric reconstruction is achieved by a global FFT-based method, without an intermediate 2D phase retrieval step. Because the illumination is spatially partially coherent, the transverse resolution of the reconstructed field achieves twice the NA of coherent systems and improved axial resolution. © 2016 Optical Society of America OCIS codes: (100.5070) Phase retrieval; (170.6900) Three-dimensional microscopy. References and links 1. F. Zernike, “Phase contrast, a new method for the microscopic observation of transparent objects,” Physica 9, 686–698 (1942). 2. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19, 1016–1026 (2011). 3. D. Murphy, Fundamentals of Light Microscopy and Electronic Imaging (Wiley-Liss, New York, NY, USA, 2001). 4. W. Lang, Nomarski Differential Interference-Contrast Microscopy (Oberkochen, Carl Zeiss, 1982). 5. M. R. Arnison, K. G. Larkin, C. J. R. Sheppard, N. I. Smith, and C. J. Cogswell, “Linear phase imaging using differential interference contrast microscopy,” J. Microsc. 214, 7–12 (2004). 6. D. Paganin and K. A. Nugent, “Noninterferometric phase imaging with partially coherent light,” Phys. Rev. Lett. 80, 2586–2589 (1998). 7. C. J. R. Sheppard, “Defocused transfer function for a partially coherent microscope and application to phase retrieval,” J. Opt. Soc. Am. A 21, 828–831 (2004). 8. J. C. Petruccelli, L. Tian, and G. Barbastathis, “The transport of intensity equation for optical path length recovery using partially coherent illumination,” Opt. Express 21, 14430–14441 (2013). 9. J. Zhong, L. Tian, J. Dauwels, and L. Waller, “Partially coherent phase imaging with simultaneous source recovery,” Opt. Express 6, 257–265 (2015). 10. J. A. Rodrigo and T. Alieva, “Rapid quantitative phase imaging for partially coherent light microscopy,” Opt. Express 22, 13472–13483 (2014). 11. P. Cloetens, W. Ludwig, J. Baruchel, D. V. Dyck, J. V. Landuyt, J. P. Guigay, and M. Schlenker, “Holotomography: Quantitative phase tomography with micrometer resolution using hard synchrotron radiation X rays,” Appl. Phys. Lett. 75, 2912–2914 (1999). 12. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). 13. Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, “Marker-free phase nanoscopy,” Nature Photon. 7, 113–117 (2013). 14. D. Brady, K. Choi, D. Marks, and R. Horisaki, “Compressive holography,” Opt. Express 17, 13040–13049 (2009). 15. Y. Sung and R. R. Dasari, “Deterministic regularization of three-dimensional optical diffraction tomography,” J. Opt. Soc. Am. A 28, 1554–1561 (2011). 16. A. Bronnikov, “Theory of quantitative phase-contrast computed tomography,” J. Opt. Soc. Am. A 19, 472–480 (2002). 17. L. Tian, J. C. Petruccelli, Q. Miao, H. Kudrolli, V. Nagarkar, and G. 
Barbastathis, “Compressive X-ray phase tomography based on the transport of intensity equation,” Opt. Lett. 38, 3418–3421 (2013). 18. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). 19. G. Gbur, M. A. Anastasio, Y. Huang, and D. Shi, “Spherical-wave intensity diffraction tomography,” J. Opt. Soc. Am. A 22, 230–238 (2005). 20. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). 21. T. Kim, R. Zhou, M. Mir, S. D. Babacan, P. S. Carney, L. L. Goddard, and G. Popescu, “White-light diffraction tomography of unlabelled live cells,” Nature Photon. 8, 256–263 (2014). 22. D. Hamilton and C. Sheppard, “Differential phase contrast in scanning optical microscopy,” J. Microsc. 133, 27–39 (1984). 23. T. N. Ford, K. K. Chu, and J. Mertz, “Phase-gradient microscopy in thick tissue with oblique back-illumination,” Nat. Methods 9, 1195–1197 (2012). 24. S. B. Mehta and C. J. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Opt. Lett. 34, 1924–1926 (2009). 25. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23, 11394–11403 (2015). 26. G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Opt. Lett. 36, 3987–3989 (2011). 27. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nature Photon. 7, 739–745 (2013). 28. Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a lightemitting diode array microscope,” J. Biomed. Opt. 19, 106002–106002 (2014). 29. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015). 30. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). 31. L. Tian, J. Wang, and L. Waller, “3D differential phase-contrast microscopy with computational illumination using an LED array,” Opt. Lett. 39, 1326–1329 (2014). 32. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22, 4960–4972 (2014). 33. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Cambridge University Press, 1999), 7th ed. 34. N. Streibl, “Three-dimensional imaging by a microscope,” J. Opt. Soc. Am. A 2, 121–127 (1985). 35. Y. Sung and C. J. R. Sheppard, “Three-dimensional imaging by partially coherent light under non-paraxial condition,” J. Opt. Soc. Am. A 28, 554–559 (2011). 36. T. H. Nguyen, C. Edwards, L. L. Goddard, and G. Popescu, “Quantitative phase imaging of weakly scattering objects using partially coherent illumination,” Opt. Express 24, 11683–11693 (2016). 37. R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Opt. Express 23, 26672–26682 (2015). 38. M. H. Jenkins and T. K. Gaylord, “Three-dimensional quantitative phase imaging via tomographic deconvolution phase microscopy,” Appl. Opt. 54, 9213–9227 (2015). 39. J. 
Guigay, “Fourier transform analysis of Fresnel diffraction patterns and in-line holograms,” Optik 49, 121–125 (1977). 40. Y. I. Nesterests and T. E. Gureyev, “Partially coherent contrast-transfer-function approximation,” J. Opt. Soc. Am. A 33, 464–474 (2016). 41. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2, 517–522 (2015). 42. T. L. Jensen, J. H. Joergensen, P. C. Hansen, and S. H. Jensen, “Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization,” BIT Numer. Math. 52, 329–356 (2012). http: //www.imm.dtu.dk/ ̃pcha/TVReg/ 43. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4, 717–719 (2007). 44. S. Jones, M. King, and A. Ward, “Determining the unique refractive index properties of solid polystyrene aerosol using broadband Mie scattering from optically trapped beads,” Phys. Chem. 15, 20735–20741 (2013). 45. Z. Jingshan, R. A. Claus, J. Dauwels, L. Tian, and L. Waller, “Transport of intensity phase imaging by intensity spectrum fitting of exponentially spaced defocus planes,” Opt. Express 22, 10661–10674 (2014).",
"title": ""
},
{
"docid": "1aa89c7b8be417345d78d1657d5f487f",
"text": "This paper proposes a new novel snubberless current-fed half-bridge front-end isolated dc/dc converter-based inverter for photovoltaic applications. It is suitable for grid-tied (utility interface) as well as off-grid (standalone) application based on the mode of control. The proposed converter attains clamping of the device voltage by secondary modulation, thus eliminating the need of snubber or active-clamp. Zero-current switching or natural commutation of primary devices and zero-voltage switching of secondary devices is achieved. Soft-switching is inherent owing to the proposed secondary modulation and is maintained during wide variation in voltage and power transfer capacity and thus is suitable for photovoltaic (PV) applications. Primary device voltage is clamped at reflected output voltage, and secondary device voltage is clamped at output voltage. Steady-state operation and analysis, and design procedure are presented. Simulation results using PSIM 9.0 are given to verify the proposed analysis and design. An experimental converter prototype rated at 200 W has been designed, built, and tested in the laboratory to verify and demonstrate the converter performance over wide variations in input voltage and output power for PV applications. The proposed converter is a true isolated boost converter and has higher voltage conversion (boost) ratio compared to the conventional active-clamped converter.",
"title": ""
},
{
"docid": "0f24b6c36586505c1f4cc001e3ddff13",
"text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.",
"title": ""
}
] |
scidocsrr
|
aafe3ac618c74b449ea74d5e19fd10b9
|
Multiple-shot human re-identification by Mean Riemannian Covariance Grid
|
[
{
"docid": "9f635d570b827d68e057afcaadca791c",
"text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.",
"title": ""
},
{
"docid": "e5d523d8a1f584421dab2eeb269cd303",
"text": "In this paper, we propose a novel appearance-based method for person re-identification, that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histograms representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performances against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.",
"title": ""
},
{
"docid": "fbc47f2d625755bda6d9aa37805b69f1",
"text": "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.",
"title": ""
}
] |
[
{
"docid": "5d91c93728632586a63634c941420c64",
"text": "A new method for analyzing analog single-event transient (ASET) data has been developed. The approach allows for quantitative error calculations, given device failure thresholds. The method is described and employed in the analysis of an OP-27 op-amp.",
"title": ""
},
{
"docid": "67db2885a2b8780cbfd19c1ff0cfba36",
"text": "Mechanocomputational techniques in conjunction with artificial intelligence (AI) are revolutionizing the interpretations of the crucial information from the medical data and converting it into optimized and organized information for diagnostics. It is possible due to valuable perfection in artificial intelligence, computer aided diagnostics, virtual assistant, robotic surgery, augmented reality and genome editing (based on AI) technologies. Such techniques are serving as the products for diagnosing emerging microbial or non microbial diseases. This article represents a combinatory approach of using such approaches and providing therapeutic solutions towards utilizing these techniques in disease diagnostics.",
"title": ""
},
{
"docid": "5c74348ce0028786990b4ca39b1e858d",
"text": "The terminology Internet of Things (IoT) refers to a future where every day physical objects are connected by the Internet in one form or the other, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about, as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, their features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attack, DDoS attacks, and node privacy leaks.",
"title": ""
},
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "58f505558cda55abf70b143d52030a2d",
"text": "Given a finite set of points P ⊆ R, we would like to find a small subset S ⊆ P such that the convex hull of S approximately contains P . More formally, every point in P is within distance from the convex hull of S. Such a subset S is called an -hull. Computing an -hull is an important problem in computational geometry, machine learning, and approximation algorithms. In many applications, the set P is too large to fit in memory. We consider the streaming model where the algorithm receives the points of P sequentially and strives to use a minimal amount of memory. Existing streaming algorithms for computing an -hull require O( (1−d)/2) space, which is optimal for a worst-case input. However, this ignores the structure of the data. The minimal size of an -hull of P , which we denote by OPT, can be much smaller. A natural question is whether a streaming algorithm can compute an -hull using only O(OPT) space. We begin with lower bounds that show, under a reasonable streaming model, that it is not possible to have a single-pass streaming algorithm that computes an -hull with O(OPT) space. We instead propose three relaxations of the problem for which we can compute -hulls using space near-linear to the optimal size. Our first algorithm for points in R2 that arrive in random-order uses O(logn ·OPT) space. Our second algorithm for points in R2 makes O(log( −1)) passes before outputting the -hull and requires O(OPT) space. Our third algorithm, for points in R for any fixed dimension d, outputs, with high probability, an -hull for all but δ-fraction of directions and requires O(OPT · log OPT) space. 1 This work was supported in part by the National Science Foundation under grant CCF-1525971. Work was done while the author was at Carnegie Mellon University. 2 This material is based upon work supported in part by the National Science Foundation under Grants No. 1447639, 1650041 and 1652257, Cisco faculty award, and by the ONR Award N00014-18-1-2364. 3 Now at DeepMind. 4 This research was supported by the Franco-American Fulbright Commission and supported in part by National Science Foundation under Grant No. 1447639, 1650041 and 1652257. The author thanks INRIA (l’Institut national de recherche en informatique et en automatique) for hosting him during the writing of this paper. 5 This material is based upon work supported in part by National Science Foundation under Grant No. 1447639, 1650041 and 1652257. Work was done while the author was at Johns Hopkins University. EA T C S © Avrim Blum, Vladimir Braverman, Ananya Kumar, Harry Lang, and Lin F. Yang; licensed under Creative Commons License CC-BY 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Editors: Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella; Article No. 21; pp. 21:1–21:13 Leibniz International Proceedings in Informatics Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany 21:2 Approximate Convex Hull of Data Streams 2012 ACM Subject Classification Theory of computation → Computational geometry, Theory of computation → Sketching and sampling, Theory of computation → Streaming models",
"title": ""
},
{
"docid": "3a84567c28d6a59271334594307263a5",
"text": "Comprehension difficulty was rated for metaphors of the form Noun1-is-aNoun2; in addition, participants completed frames of the form Noun1-is-________ with their literal interpretation of the metaphor. Metaphor comprehension was simulated with a computational model based on Latent Semantic Analysis. The model matched participants’ interpretations for both easy and difficult metaphors. When interpreting easy metaphors, both the participants and the model generated highly consistent responses. When interpreting difficult metaphors, both the participants and the model generated disparate responses.",
"title": ""
},
{
"docid": "a297eea91a94a2945f6860b405205681",
"text": "AIM\nThe aim of this study was to determine the treatment outcome of the use of a porcine monolayer collagen matrix (mCM) to augment peri-implant soft tissue in conjunction with immediate implant placement as an alternative to patient's own connective tissue.\n\n\nMATERIALS AND METHODS\nA total of 27 implants were placed immediately in 27 patients (14 males and 13 females, with a mean age of 52.2 years) with simultaneous augmentation of the soft tissue by the use of a mCM. The patients were randomly divided into two groups: Group I: An envelope flap was created and mCM was left coronally uncovered, and group II: A coronally repositioned flap was created and the mCM was covered by the mucosa. Soft-tissue thickness (STTh) was measured at the time of surgery (T0) and 6 months postoperatively (T1) using a customized stent. Cone beam computed tomographies (CBCTs) were taken from 12 representative cases at T1. A stringent plaque control regimen was enforced in all the patients during the 6-month observation period.\n\n\nRESULTS\nMean STTh change was similar in both groups (0.7 ± 0.2 and 0.7 ± 0.1 mm in groups I and II respectively). The comparison of STTh between T0 and T1 showed a statistically significant increase of soft tissue in both groups I and II as well as in the total examined population (p < 0.001). The STTh change as well as matrix thickness loss were comparable in both groups (p > 0.05). The evaluation of the CBCTs did not show any signs of resorption of the buccal bone plate.\n\n\nCONCLUSION\nWithin the limitations of this study, it could be concluded that the collagen matrix used in conjunction with immediate implant placement leads to an increased thickness of peri-implant soft tissue independent of the flap creation technique and could be an alternative to connective tissue graft.\n\n\nCLINICAL SIGNIFICANCE\nThe collagen matrix used seems to be a good alternative to patient's own connective tissue and could be used for the soft tissue augmentation around dental implants.",
"title": ""
},
{
"docid": "a252ec33139d9489133b91c2551a694f",
"text": "The lucrative rewards of security penetrations into large organizations have motivated the development and use of many sophisticated rootkit techniques to maintain an attacker's presence on a compromised system. Due to the evasive nature of such infections, detecting these rootkit infestations is a problem facing modern organizations. While many approaches to this problem have been proposed, various drawbacks that range from signature generation issues, to coverage, to performance, prevent these approaches from being ideal solutions.\n In this paper, we present Blacksheep, a distributed system for detecting a rootkit infestation among groups of similar machines. This approach was motivated by the homogenous natures of many corporate networks. Taking advantage of the similarity amongst the machines that it analyses, Blacksheep is able to efficiently and effectively detect both existing and new infestations by comparing the memory dumps collected from each host.\n We evaluate Blacksheep on two sets of memory dumps. One set is taken from virtual machines using virtual machine introspection, mimicking the deployment of Blacksheep on a cloud computing provider's network. The other set is taken from Windows XP machines via a memory acquisition driver, demonstrating Blacksheep's usage under more challenging image acquisition conditions. The results of the evaluation show that by leveraging the homogeneous nature of groups of computers, it is possible to detect rootkit infestations.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "f2ce432386b8f407c416ea3d95d58427",
"text": "The use of Computer Aided Design (CAD) in forensic science is not new. However CAD did not become a (quality) standard for crime scene sketching. If the crime scene sketch is an effective way to present measurements, it must respond to accuracy criteria to supplement the documentary work by note taking and crime scene photography. The forensic photography unit of the Zürich Police changed already some years ago from hand drawn crime scene sketches to CAD sketches. Meanwhile the technique is used regularly for all major crime scene work. Using the Rolleimetric MR-2 single-camera measuring system combined with commercial CAD-software, crime scene sketches of a high quality standard are obtained.",
"title": ""
},
{
"docid": "48d778934127343947b494fe51f56a33",
"text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.",
"title": ""
},
{
"docid": "f06552ac766cb1b0d0e00ec2e47882f4",
"text": "Recurrent neural networks (RNNs) are powerful architectures to model sequential data, due to their capability to learn short and long-term dependencies between the basic elements of a sequence. Nonetheless, popular tasks such as speech or images recognition, involve multi-dimensional input features that are characterized by strong internal dependencies between the dimensions of the input vector. We propose a novel quaternion recurrent neural network (QRNN), alongside with a quaternion long-short term memory neural network (QLSTM), that take into account both the external relations and these internal structural dependencies with the quaternion algebra. Similarly to capsules, quaternions allow the QRNN to code internal dependencies by composing and processing multidimensional features as single entities, while the recurrent operation reveals correlations between the elements composing the sequence. We show that both QRNN and QLSTM achieve better performances than RNN and LSTM in a realistic application of automatic speech recognition. Finally, we show that QRNN and QLSTM reduce by a maximum factor of 3.3x the number of free parameters needed, compared to real-valued RNNs and LSTMs to reach better results, leading to a more compact representation of the relevant information.",
"title": ""
},
{
"docid": "0ac9ad839f21bd03342dd786b09155fe",
"text": "Graphs are fundamental data structures which concisely capture the relational structure in many important real-world domains, such as knowledge graphs, physical and social interactions, language, and chemistry. Here we introduce a powerful new approach for learning generative models over graphs, which can capture both their structure and attributes. Our approach uses graph neural networks to express probabilistic dependencies among a graph’s nodes and edges, and can, in principle, learn distributions over any arbitrary graph. In a series of experiments our results show that once trained, our models can generate good quality samples of both synthetic graphs as well as real molecular graphs, both unconditionally and conditioned on data. Compared to baselines that do not use graph-structured representations, our models often perform far better. We also explore key challenges of learning generative models of graphs, such as how to handle symmetries and ordering of elements during the graph generation process, and offer possible solutions. Our work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vectorand sequence-like knowledge representations, toward more expressive and flexible relational data structures.",
"title": ""
},
{
"docid": "33fe68214ea062f2cdb310a74a9d6d8b",
"text": "In this study, the authors examine the relationship between abusive supervision and employee workplace deviance. The authors conceptualize abusive supervision as a type of aggression. They use work on retaliation and direct and displaced aggression as a foundation for examining employees' reactions to abusive supervision. The authors predict abusive supervision will be related to supervisor-directed deviance, organizational deviance, and interpersonal deviance. Additionally, the authors examine the moderating effects of negative reciprocity beliefs. They hypothesized that the relationship between abusive supervision and supervisor-directed deviance would be stronger when individuals hold higher negative reciprocity beliefs. The results support this hypothesis. The implications of the results for understanding destructive behaviors in the workplace are examined.",
"title": ""
},
{
"docid": "44a86bb41e58da96d72efc1544e3b420",
"text": "The front-end hardware complexity of a coherent array imaging system scales with the number of active array elements that are simultaneously used for transmission or reception of signals. Different imaging methods use different numbers of active channels and data collection strategies. Conventional full phased array (FPA) imaging produces the best image quality using all elements for both transmission and reception, and it has high front-end hardware complexity. In contrast, classical synthetic aperture (CSA) imaging only transmits on and receives from a single element at a time, minimizing the hardware complexity but achieving poor image quality. We propose a new coherent array imaging method - phased subarray (PSA) imaging - that performs partial transmit and receive beam-forming using a subset of adjacent elements at each firing step. This method reduces the number of active channels to the number of subarray elements; these channels are multiplexed across the full array and a reduced number of beams are acquired from each subarray. The low-resolution subarray images are laterally upsampled, interpolated, weighted, and coherently summed to form the final high-resolution PSA image. The PSA imaging reduces the complexity of the front-end hardware while achieving image quality approaching that of FPA imaging",
"title": ""
},
{
"docid": "2c9e37c4db08d0be063f778e50017469",
"text": "Poor children confront widespread environmental inequities. Compared with their economically advantaged counterparts, they are exposed to more family turmoil, violence, separation from their families, instability, and chaotic households. Poor children experience less social support, and their parents are less responsive and more authoritarian. Low-income children are read to relatively infrequently, watch more TV, and have less access to books and computers. Low-income parents are less involved in their children's school activities. The air and water poor children consume are more polluted. Their homes are more crowded, noisier, and of lower quality. Low-income neighborhoods are more dangerous, offer poorer municipal services, and suffer greater physical deterioration. Predominantly low-income schools and day care are inferior. The accumulation of multiple environmental risks rather than singular risk exposure may be an especially pathogenic aspect of childhood poverty.",
"title": ""
},
{
"docid": "52126cacd2e36ed14e45908f2ddc8530",
"text": "In this paper, we propose a new refinement filter for depth maps. The filter convolutes a depth map by a jointly computed kernel on a natural image with a weight map. We call the filter weighted joint bilateral filter. The filter fits an outline of an object in the depth map to the outline of the object in the natural image, and it reduces noises. An additional filter of slope depth compensation filter removes blur across object boundary. The filter set’s computational cost is low and is independent of depth ranges. Thus we can refine depth maps to generate accurate depth map with lower cost. In addition, we can apply the filters for various types of depth map, such as computed by simple block matching, Markov random field based optimization, and Depth sensors. Experimental results show that the proposed filter has the best performance of improvement of depth map accuracy, and the proposed filter can perform real-time refinement.",
"title": ""
},
{
"docid": "1bd1d77e40e537ab77fa08a653ef1905",
"text": "Chip mounter is a machine in SMT Line that has the task of picking and placing SMD components onto a PCB that has been coated with solder paste. To be able to place SMD components accurately and rapidly onto PCB, it is necessary to have a desciption on the location the chip to be placed on the PCB, the current position of the chip, and the current position of the chip footprint on the board. This paper describes the steps that have been designed and implemented to meet the 3 requirements mentioned by using a digital microscope camera as a downward vision and the use of image processing algorithms as the computer vision feature of the chip-mounter. In the image processing algorithms used here, color thresholding in HSV color model for object selection is incorporated. In order to sharpen the result of object selection, morphological opening and closing are invoked. Canny operator is used to get the edge of the object. Upon contouring the object, the centroid location of the selected object can subsequently be calculated. As shown during the test, the computer vision method implemented in this work is capable of producing data required by the chip-mounter to do position correction if there is non-uniformity errors of the pattern on the PCB panel.",
"title": ""
},
{
"docid": "bfba5508c4b50ef5a681070202482ad5",
"text": "Industry 4.0 is an important trend in factory automation nowadays. Among the Automated-Storage-and-Retrieval-System (ASRS) is one of the most important issues for industry. It is widely used in a variety of industries for a variety of storage applications in factories and warehouses. However, the cost of constructing an ASRS is so high that most small/medium enterprises cannot afford it. A forklift system is a cheaper alternative to a complicated ASRS. In this work, a new pallet detection method that uses an Adaptive Structure Feature (ASF) and Direction Weighted Overlapping (DWO) ratio to allow forklifts to pick up a pallet is proposed, using a monocular vision system on the forklift. Combining the ASF and DWO ratio for pallet detection, the proposed method removes most of the non-stationary (dynamic) background and significantly increases the processing efficiency. A Haar like-based Adaboost scheme uses an AS for pallets algorithm to detect pallets. It detects the pallet in a dark environment. Finally, by calculating the DWO ratio between the detected pallets and tracking records, it avoids erroneous candidates during object tracking. Therefore, this work improves the pallet detection to solve the problem with an effective design. As results show that the hybrid algorithms that are proposed in this work increase the average pallet detection rate by 95 %.",
"title": ""
}
] |
scidocsrr
|
304675278baae60d59233afbc769cad1
|
Wideband probe-type waveguide-to-microstrip transition for V-band applications
|
[
{
"docid": "3d4b6adc731e9eea3ce7a2a0839f692d",
"text": "In this paper design of a wideband low loss tapered antipodal fin-line waveguide-to-microstrip transition operating in the 63-90 GHz frequency band is presented. The transition structure consists of a metal body containing a standard WR12 waveguide and a printed circuit board (PCB) with tapered antipodal fin-line fabricated using standard high frequency PCB technology Rogers RO4003C. The transition is realized by clamping the PCB between two halves of the metal body. All parameters of the designed transition are optimized for superior performance in the 71-86 GHz band that is dedicated for backhaul systems of mobile networks. Measurement results of the designed waveguide-to-microstrip transition show that it provides a large transmission bandwidth of 63-90 GHz (> 27 GHz) for the 10 dB level of return loss. The measured average insertion loss is about 0.6 dB in the 71-86 GHz frequency band and is lower than 1.1 dB in all the E-band from 60 GHz to 90 GHz. Hence, the achieved characteristics are sufficient for use the designed transition in various E-band applications such as WLAN/WPAN communications, 71-76/81-86 GHz wireless backhaul systems, and 77 GHz automotive radars.",
"title": ""
},
{
"docid": "172216abbcb7acb25d5cdb8d65c2becf",
"text": "In this paper, design of a planar wideband waveguide to microstrip transition for the 60 GHz frequency band is presented. The designed transition is fabricated using standard high frequency multilayer printed circuit board technology RO4003C. The waveguide to microstrip transition provides low production cost and allows for simple integration of the WR-15 rectangular waveguide without any modifications in the waveguide structure. Results of electromagnetic simulation and experimental investigation of the designed waveguide to microstrip transition are presented. The transmission bandwidth of the transition is equal to the full bandwidth of the WR-15 waveguide (50–75 GHz) for the −3 dB level of the insertion loss that was achieved by special modifications in the general aperture coupled transition structure. The transition loss is lower than 1 dB at the central frequency of 60 GHz.",
"title": ""
},
{
"docid": "5ae4b1d4ef00afbde49edfaa2728934b",
"text": "A wideband, low loss inline transition from microstrip line to rectangular waveguide is presented. This transition efficiently couples energy from a microstrip line to a ridge and subsequently to a TE10 waveguide. This unique structure requires no mechanical pressure for electrical contact between the microstrip probe and the ridge because the main planar circuitry and ridge sections are placed on a single housing. The measured insertion loss for back-to-back transition is 0.5 – 0.7 dB (0.25 – 0.35 dB/transition) in the band 50 – 72 GHz.",
"title": ""
},
{
"docid": "aae42c27d72e35573179bff9b2c31a1b",
"text": "This paper propose a novel millimeter-wave microstrip (MSL) to WR 12 standard rectangular waveguide transition. These dissimilar structures are interconnected via a trapezoidal substrate integrated waveguide (TSIW). The MSL and the TSIW are integrated on the same ceramic substrate of 9.9 of permittivity and 125 of thickness. The MSL has been first transformed into an air-filled rectangular waveguide, and then a horn transformer has been used to match the WR12 waveguide. The central frequency of operation is 61 GHz and dedicated to V-band wireless communications applications. The S parameters measurements of the back-to-back connected transitions show an insertion loss less than 1.5 dB and a return loss better than -10 dB over a 5 GHz bandwidth from 60 to 65 GHz. The high performance and the compact size, enable the transition to be employed in a number of millimeter-wave applications.",
"title": ""
}
] |
[
{
"docid": "6315288620132b456feeb78f36362ca7",
"text": "Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably an autonomous system will find itself in a situation in which it needs to not only choose to obey a rule or not, but also make a complex ethical decision. However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan. © 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "2a388893c88a9cdf44ed5ace584fbad7",
"text": "Bayesian network (BN) classifiers with powerful reasoning capabilities have been increasingly utilized to detect intrusion with reasonable accuracy and efficiency. However, existing BN classifiers for intrusion detection suffer two problems. First, such BN classifiers are often trained from data using heuristic methods that usually select suboptimal models. Second, the classifiers are trained using very large datasets which may be time consuming to obtain in practice. When the size of training dataset is small, the performance of a single BN classifier is significantly reduced due to its inability to represent the whole probability distribution. To alleviate these problems, we build a Bayesian classifier by Bayesian Model Averaging(BMA) over the k-best BN classifiers, called Bayesian Network Model Averaging (BNMA) classifier. We train and evaluate BNMA classifier on the NSL-KDD dataset, which is less redundant, thus more judicial than the commonly used KDD Cup 99 dataset. We show that the BNMA classifier performs significantly better in terms of detection accuracy than the Naive Bayes classifier and the BN classifier built with heuristic method. We also show that the BNMA classifier trained using a smaller dataset outperforms two other classifiers trained using a larger dataset. This also implies that the BNMA is beneficial in accelerating the detection process due to its less dependance on the potentially prolonged process of collecting large training datasets.",
"title": ""
},
{
"docid": "b84e816e6c8b8777d67d67dc76f73e2b",
"text": "An increasing fraction of today's social interactions occur using online social media as communication channels. Recent worldwide events, such as social movements in Spain or revolts in the Middle East, highlight their capacity to boost people's coordination. Online networks display in general a rich internal structure where users can choose among different types and intensity of interactions. Despite this, there are still open questions regarding the social value of online interactions. For example, the existence of users with millions of online friends sheds doubts on the relevance of these relations. In this work, we focus on Twitter, one of the most popular online social networks, and find that the network formed by the basic type of connections is organized in groups. The activity of the users conforms to the landscape determined by such groups. Furthermore, Twitter's distinction between different types of interactions allows us to establish a parallelism between online and offline social networks: personal interactions are more likely to occur on internal links to the groups (the weakness of strong ties); events transmitting new information go preferentially through links connecting different groups (the strength of weak ties) or even more through links connecting to users belonging to several groups that act as brokers (the strength of intermediary ties).",
"title": ""
},
{
"docid": "c4c3a0bccbf4e093750e1ef356d2f09c",
"text": "We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN. This memory-enhanced RNN decoder is called MEMDEC. At each time during decoding, MEMDEC will read from this memory and write to this memory once, both with content-based addressing. Unlike the unbounded memory in previous work(Bahdanau et al., 2014) to store the representation of source sentence, the memory in MEMDEC is a matrix with predetermined size designed to better capture the information important for the decoding process at each time step. Our empirical study on Chinese-English translation shows that it can improve by 4.8 BLEU upon Groundhog and 5.3 BLEU upon on Moses, yielding the best performance achieved with the same training set.",
"title": ""
},
{
"docid": "8972e89b0b06bf25e72f8cb82b6d629a",
"text": "Community detection is an important task for mining the structure and function of complex networks. Generally, there are several different kinds of nodes in a network which are cluster nodes densely connected within communities, as well as some special nodes like hubs bridging multiple communities and outliers marginally connected with a community. In addition, it has been shown that there is a hierarchical structure in complex networks with communities embedded within other communities. Therefore, a good algorithm is desirable to be able to not only detect hierarchical communities, but also identify hubs and outliers. In this paper, we propose a parameter-free hierarchical network clustering algorithm SHRINK by combining the advantages of density-based clustering and modularity optimization methods. Based on the structural connectivity information, the proposed algorithm can effectively reveal the embedded hierarchical community structure with multiresolution in large-scale weighted undirected networks, and identify hubs and outliers as well. Moreover, it overcomes the sensitive threshold problem of density-based clustering algorithms and the resolution limit possessed by other modularity-based methods. To illustrate our methodology, we conduct experiments with both real-world and synthetic datasets for community detection, and compare with many other baseline methods. Experimental results demonstrate that SHRINK achieves the best performance with consistent improvements.",
"title": ""
},
{
"docid": "63a86777843c6043dd07a3cb0441b58b",
"text": "Tourism industry has become one of the most profitable industries in the world. Customer satisfaction has been identified as a key performance indicator in hotel industry. This study focused on customer satisfaction of a five star hotel in Kandy district. Servqual model was employed in the assessment of customer satisfaction of the hotel. The overall objective of this study was to examine the level of customer satisfaction and major factors contributing to customer satisfaction in a five star hotel. The data were collected using a questionnaire containing 49 questions based on 22 variables of the five dimensions of Tangibility, Reliability, Responsiveness, Assurance and Empathy. Sixty residential customers of the hotel were randomly selected. Focus group discussions and a perception survey among hotel staff were also conducted to enrich the findings. Data were analyzed using descriptive statistics, MINITAB Version 14 with Two Sample Ttest. Majority of the customers expressed their satisfaction with the overall service they received from the hotel, especially regarding Tangibility, Responsiveness and Assurance. Findings revealed that the hotel had not fulfilled the customers’ satisfaction with regard to Reliability and Empathy. It was note-worthy that a minority of customers felt overall dissatisfied with the service of the hotel. Customers seemed to have perceived the same service differently. Customers’ expectations had been influenced by their knowledge about general standards of hotel practices.",
"title": ""
},
{
"docid": "37a838344c441bcb8bc1c1f233b2f0e7",
"text": "Cloud computing platforms enable applications to offer low latency access to user data by offering storage services in several geographically distributed data centers. In this paper, we identify the high tail latency problem in cloud CDN via analyzing a large-scale dataset collected from 783,944 users in a major cloud CDN. We find that the data downloading latency in cloud CDN is highly variable, which may significantly degrade the user experience of applications. To address the problem, we present TailCutter, a workload scheduling mechanism that aims at optimizing the tail latency while meeting the cost constraint given by application providers. We further design the Maximum Tail Minimization Algorithm (MTMA) working in TailCutter mechanism to optimally solve the Tail Latency Minimization (TLM) problem in polynomial time. We implement TailCutter across data centers of Amazon S3 and Microsoft Azure. Our extensive evaluation using large-scale real world data traces shows that TailCutter can reduce up to 68% 99th percentile user-perceived latency in comparison with alternative solutions under cost constraints.",
"title": ""
},
{
"docid": "4f490f8994a6207da520152cc976135d",
"text": "Fifty three children were referred following community needlestick injuries, August 1995 to September 2003. Twenty five attended for serology six months later. None were positive for HIV, or hepatitis B or C. Routine follow up after community needlestick injury is unnecessary. HIV post-exposure prophylaxis should only be considered in high risk children.",
"title": ""
},
{
"docid": "94bd0b242079d2b82c141e9f117154f7",
"text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.",
"title": ""
},
{
"docid": "8a523668c8549db8aeb5a412f979a7de",
"text": "The avalanche effect is an important performance that any block cipher must have. With the AES algorithm program and experiments, we fully test and research the avalanche effect performance of the AES algorithm, and give the changed cipher-bit numbers when respectively changing every bit of the plaintext and key in turn. The test results show that the AES algorithm has very good avalanche effect Performance indeed.",
"title": ""
},
{
"docid": "bc6a6cf11881326360387cbed997dcf1",
"text": "The explanation of heterogeneous multivariate time series data is a central problem in many applications. The problem requires two major data mining challenges to be addressed simultaneously: Learning models that are humaninterpretable and mining of heterogeneous multivariate time series data. The intersection of these two areas is not adequately explored in the existing literature. To address this gap, we propose grammar-based decision trees and an algorithm for learning them. Grammar-based decision tree extends decision trees with a grammar framework. Logical expressions, derived from context-free grammar, are used for branching in place of simple thresholds on attributes. The added expressivity enables support for a wide range of data types while retaining the interpretability of decision trees. By choosing a grammar based on temporal logic, we show that grammar-based decision trees can be used for the interpretable classification of high-dimensional and heterogeneous time series data. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to analyze the classic Australian Sign Language dataset as well as categorize and explain near midair collisions to support the development of a prototype aircraft collision avoidance system.",
"title": ""
},
{
"docid": "d3e83dbb08c64c67a2b26a4643cfb234",
"text": "This paper presents a control method for efficiency improvement of the LLC resonant converter operating with a wide input-voltage and/or output-voltage range by means of topology morphing, i.e., changing of power converter's topology to that which is the most optimal for given input-voltage and/or output-voltage conditions. The proposed on-the-fly topology-morphing control maintains a tight regulation of the output during the topology transitions so that topology transitions are made without noticeable output-voltage transients. The performance of the proposed topology morphing method is verified experimentally on an 800-W LLC dc/dc converter prototype designed for a 100-V to 400-V input-voltage range.",
"title": ""
},
{
"docid": "7df56d787a4eb94829b011e2cb65580b",
"text": "With the wide deployment of cloud computing in many business enterprises as well as science and engineering domains, high quality security services are increasingly critical for processing workflow applications with sensitive intermediate data. Unfortunately, most existing worklfow scheduling approaches disregard the security requirements of the intermediate data produced by workflows, and overlook the performance impact of encryption time of intermediate data on the start of subsequent workflow tasks. Furthermore, the idle time slots on resources, resulting from data dependencies among workflow tasks, have not been adequately exploited to mitigate the impact of data encryption time on workflows’ makespans and monetary cost. To address these issues, this paper presents a novel task-scheduling framework for security sensitive workflows with three novel features. First, we provide comprehensive theoretical analyses on how selectively duplicating a task’s predecessor tasks is helpful for preventing both the data transmission time and encryption time from delaying task’s start time. Then, we define workflow tasks’ latest finish time, and prove that tasks can be completed before tasks’ latest finish time by using cheapest resources to reduce monetary cost without delaying tasks’ successors’ start time and workflows’ makespans. Based on these analyses, we devise a novel scheduling appro ach with selective tasks duplication, named SOLID, incorporating two important phases: 1) task scheduling with selectively duplicating predecessor tasks to idle time slots on resources; and 2) intermediate data encrypting by effectively exploiting tasks’ laxity time. We evaluate our solution approach through rigorous performance evaluation study using both randomly generated workflows and some real-world workflow traces. Our results show that the proposed SOLID approach prevails over existing algorithms in terms of makespan, monetary costs and resource efficiency.",
"title": ""
},
{
"docid": "5d3f5e6c52b3ccb97fb8a891074d4fb4",
"text": "OBJECTIVE\nThis study investigated the effects of school-based occupational therapy services on students' handwriting.\n\n\nMETHOD\nStudents 7 to 10 years of age with poor handwriting legibility who received direct occupational therapy services (n = 29) were compared with students who did not receive services (n = 9) on handwriting legibility and speed and associated performance components. Visual-motor, visual-perception, in-hand manipulation, and handwriting legibility and speed were measured at the beginning and end of the academic year. The intervention group received a mean of 16.4 sessions and 528 min of direct occupational therapy services during the school year. According to the therapists, visual-motor skills and handwriting practice were emphasized most in intervention.\n\n\nRESULTS\nStudents in the intervention group showed significant increases in in-hand manipulation and position in space scores. They also improved more in handwriting legibility scores than the students in the comparison group. Fifteen students in the intervention group demonstrated greater than 90% legibility at the end of the school year. On average, legibility increased by 14.2% in the students who received services and by 5.8% in the students who did not receive services. Speed increased slightly more in the students who did not receive services.\n\n\nCONCLUSION\nStudents who received occupational therapy services demonstrated improved letter legibility, but speed and numeral legibility did not demonstrate positive intervention effects.",
"title": ""
},
{
"docid": "38bea301ed3ad1ef99893d0ab84a94d1",
"text": "Artificial barriers, such as nest boxes and metal collars, are sometimes used, with variable success, to exclude predators and/or competitors from tree nests of vulnerable bird species. This paper describes the observed response of captive stoats (Mustela erminea) to a nest box design and an aluminium sheet collar used to protect kaka (Nestor meridionalis) nest cavities. The nest box, a prototype for kaka, was manufactured from PVC pipe. Initial trials failed to exclude stoats until an overhanging roof was added. All subsequent trials successfully prevented access by stoats. Trials with a 590 mm wide aluminium collar were less successful, but this was mainly due to restrictions enforced by enclosure design: Stoats gained access above the collar via the enclosure walls and ceiling. In only one of twelve trials was a stoat able to climb past the collar itself. The conservation implications of these trials and directions for future research are discussed. __________________________________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "2429d1f41067ca2c615c4f36223815bd",
"text": "An approach for producing a long pulse up to 100 ns is presented. The generator based on this approach consists of a Tesla transformer and a set of pulse-forming networks (PFNs). The Tesla transformer is used to charge pulse-forming lines (PFLs) and PFNs which are in parallel. When the voltage increases to a certain value, the main switch will close, and the PFLs and PFNs will discharge rapidly to the load. Therefore, a high-voltage long pulse is formed on the load. The amplitude of this pulse is dependent only on the charging voltage and the matching state between the load and the PFL (PFN). The pulsewidth is determined by the transmission time of the PFL and PFN. The rise time is determined by the working state of the main switch and the impedance of the PFL and is independent of the parameters of the PFN. The PFN is multistage and assembled in series. The single-stage PFN is formed with ceramic capacitors placed between two unclosed annular plates. The total series impedance is equal to the sum of every single-stage PFN's impedance. A nine-stage PFN is used in the generator, and the total impedance is 40 Omega. Experimental results show that a high voltage of an amplitude of 300 kV, current of 6.9 kA, and duration of 110 ns is obtained at a repetition rate of 10 Hz, with a rise time of approximately 7 ns.",
"title": ""
},
{
"docid": "f1a0e58c3417f8078dc0cc97577dda93",
"text": "This discussion takes the position that information retrieval systems are fundamentally linguistic in nature in essence, the languages of document representation and searching are dialects of natural language. Because of this, the discipline of the Philosophy of Language should have some bearing on the problems of document representation and search query formulation. The philosophies of Austin, Searle, Grice and Wittgenstein are briefly examined and their relevance to information retrieval theory is discussed.",
"title": ""
},
{
"docid": "9d4b2097e2e86e8392bc90876a1db782",
"text": "With the recent amendment in Brazilian law, where possession of files containing child pornography is now considered a crime, the need to detect this type of content at crime scenes increased. This paper presents the NuDetective Forensic Tool, which was developed in order to assist forensic examiners to conduct such analysis in a timely manner at the crime scene. This Tool performs automatic detection of nudity in images and also performs analysis of file names. Two evaluation experiments of the Tool were performed and showed detection rates around 95%, with low rates of false positives, combined with fast processing.",
"title": ""
},
{
"docid": "10d69148c3a419e4ffe3bf1ca4c7c9d7",
"text": "Discovering object classes from images in a fully unsupervised way is an intrinsically ambiguous task; saliency detection approaches however ease the burden on unsupervised learning. We develop an algorithm for simultaneously localizing objects and discovering object classes via bottom-up (saliency-guided) multiple class learning (bMCL), and make the following contributions: (1) saliency detection is adopted to convert unsupervised learning into multiple instance learning, formulated as bottom-up multiple class learning (bMCL); (2) we utilize the Discriminative EM (DiscEM) to solve our bMCL problem and show DiscEM's connection to the MIL-Boost method[34]; (3) localizing objects, discovering object classes, and training object detectors are performed simultaneously in an integrated framework; (4) significant improvements over the existing methods for multi-class object discovery are observed. In addition, we show single class localization as a special case in our bMCL framework and we also demonstrate the advantage of bMCL over purely data-driven saliency methods.",
"title": ""
}
] |
scidocsrr
|
5cecc56ba503df63909e1d87e9fde43d
|
Solving the dynamic Vehicle Routing Problem using genetic algorithms
|
[
{
"docid": "3a3d6fecb580c2448c21838317aec3e2",
"text": "The Vehicle Routing Problem with Time windows (VRPTW) is an extension of the capacity constrained Vehicle Routing Problem (VRP). The VRPTW is NP-Complete and instances with 100 customers or more are very hard to solve optimally. We represent the VRPTW as a multi-objective problem and present a genetic algorithm solution using the Pareto ranking technique. We use a direct interpretation of the VRPTW as a multi-objective problem, in which the two objective dimensions are number of vehicles and total cost (distance). An advantage of this approach is that it is unnecessary to derive weights for a weighted sum scoring formula. This prevents the introduction of solution bias towards either of the problem dimensions. We argue that the VRPTW is most naturally viewed as a multi-objective problem, in which both vehicles and cost are of equal value, depending on the needs of the user. A result of our research is that the multi-objective optimization genetic algorithm returns a set of solutions that fairly consider both of these dimensions. Our approach is quite effective, as it provides solutions competitive with the best known in the literature, as well as new solutions that are not biased toward the number of vehicles. A set of well-known benchmark data are used to compare the effectiveness of the proposed method for solving the VRPTW.",
"title": ""
}
] |
[
{
"docid": "c0fd5f1adeb0611f5db7c756eb395c85",
"text": "Following disastrous earthquakes in Alaska and in Niigata, Japan in 1964, Professors H. B. Seed and I. M. Idriss developed and published a methodology termed the ‘‘simplified procedure’’ for evaluating liquefaction resistance of soils. This procedure has become a standard of practice throughout North America and much of the world. The methodology which is largely empirical, has evolved over years, primarily through summary papers by H. B. Seed and his colleagues. No general review or update of the procedure has occurred, however, since 1985, the time of the last major paper by Professor Seed and a report from a National Research Council workshop on liquefaction of soils. In 1996 a workshop sponsored by the National Center for Earthquake Engineering Research (NCEER) was convened by Professors T. L. Youd and I. M. Idriss with 20 experts to review developments over the previous 10 years. The purpose was to gain consensus on updates and augmentations to the simplified procedure. The following topics were reviewed and recommendations developed: (1) criteria based on standard penetration tests; (2) criteria based on cone penetration tests; (3) criteria based on shear-wave velocity measurements; (4) use of the Becker penetration test for gravelly soil; (4) magnitude scaling factors; (5) correction factors for overburden pressures and sloping ground; and (6) input values for earthquake magnitude and peak acceleration. Probabilistic and seismic energy analyses were reviewed but no recommendations were formulated. JOURNAL OF GEOTECHNICAL AND GEOENVIRONMENTAL ENGINEERING / OCTOBER 2001 / 817 This Summary Report, originally published in April 2001, is being ublished so that the contribution of all workshop participants as aurs can be officially recognized. The original version listed only two thors, plus a list of 19 workshop participants. This was incorrect; all individuals should have been identified as authors. ASCE deeply reets the error. Prof., Brigham Young Univ., Provo, UT 84602. Prof., Univ. of California at Davis, Davis, CA 95616. Prof., Clemson Univ., Clemson, SC 29634-0911; formerly, Nat. Inst. Standards and Technol., Gaithersburg, MD. Bechtel Corp., San Francisco, CA 94119-3965. PhD, GEI Consultants, Inc., Winchester, MA 01890. PhD, Engrg. Consultant, Waban, MA 02468-1103. Prof., Rensselaer Polytechnic Inst., Troy, NY 12180. Prof., Univ. of British Columbia, Vancouver, BC, Canada. California Dept. of Water Resour., Sacramento, CA 94236-0001. U.S. Army Engr. Wtrwy. Experiment Station, Vicksburg, MS 39180. Prof., Sci. Univ. of Tokyo, Tokyo, Japan. U.S. Army Engr. Wtrwy. Experiment Station, Vicksburg, MS 39180. Parsons Brinckerhoff, Boston, MA 02116. PhD, U.S. Army Engr. Wtrwy. Experiment Station, Vicksburg, MS 39180. Prof., Univ. of Southern California, Los Angeles, CA 90089-2531. Prof., Virginia Polytechnic Inst., Blacksburg, VA 24061. PhD, Prin., Geomatrix Consultants, Santa Ana, CA 94612. Geomatrix Consultants, Oakland, CA 94612. Prof., Univ. of Alberta, Edmonton, Alberta, Canada. Prof., Univ. of California, Berkeley, CA 94720. Prof., Univ. of Texas at Austin, Austin, TX 78712. Note. Discussion open until March 1, 2002. To extend the closing date one month, a written request must be filed with the ASCE Manager of Journals. The manuscript for this paper was submitted for review and possible publication on January 18, 2000; revised November 14, 2000. This paper is part of the Journal of Geotechnical and Geoenvironmental Engineering, Vol. 127, No. 10, October, 2001. 
qASCE, ISSN 10900241/01/0010-0817–0833/$8.00 1 $.50 per page. Paper No. 22223. Downloaded 31 Oct 2010 to 143.89.22.104. Redistribution subject to ASCE license or copyright. Visithttp://www.ascelibrary.org",
"title": ""
},
{
"docid": "1450854a32ea6c18f4cc817f686aaf15",
"text": "This article reports on the development of two measures relating to historical trauma among American Indian people: The Historical Loss Scale and The Historical Loss Associated Symptoms Scale. Measurement characteristics including frequencies, internal reliability, and confirmatory factor analyses were calculated based on 143 American Indian adult parents of children aged 10 through 12 years who are part of an ongoing longitudinal study of American Indian families in the upper Midwest. Results indicate both scales have high internal reliability. Frequencies indicate that the current generation of American Indian adults have frequent thoughts pertaining to historical losses and that they associate these losses with negative feelings. Two factors of the Historical Loss Associated Symptoms Scale indicate one anxiety/depression component and one anger/avoidance component. The results are discussed in terms of future research and theory pertaining to historical trauma among American Indian people.",
"title": ""
},
{
"docid": "3c58e5fa9c216edc12533f0ca13bb44d",
"text": "Nanocelluloses, including nanocrystalline cellulose, nanofibrillated cellulose and bacterial cellulose nanofibers, have become fascinating building blocks for the design of new biomaterials. Derived from the must abundant and renewable biopolymer, they are drawing a tremendous level of attention, which certainly will continue to grow in the future driven by the sustainability trend. This growing interest is related to their unsurpassed quintessential physical and chemical properties. Yet, owing to their hydrophilic nature, their utilization is restricted to applications involving hydrophilic or polar media, which limits their exploitation. With the presence of a large number of chemical functionalities within their structure, these building blocks provide a unique platform for significant surface modification through various chemistries. These chemical modifications are prerequisite, sometimes unavoidable, to adapt the interfacial properties of nanocellulose substrates or adjust their hydrophilic-hydrophobic balance. Therefore, various chemistries have been developed aiming to surface-modify these nano-sized substrates in order to confer to them specific properties, extending therefore their use to highly sophisticated applications. This review collocates current knowledge in the research and development of nanocelluloses and emphasizes more particularly on the chemical modification routes developed so far for their functionalization.",
"title": ""
},
{
"docid": "8954672b2e2b6351abfde0747fd5d61c",
"text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.",
"title": ""
},
{
"docid": "cc9741eb6e5841ddf10185578f26a077",
"text": "The context of prepaid mobile telephony is specific in the way that customers are not contractually linked to their operator and thus can cease their activity without notice. In order to estimate the retention efforts which can be engaged towards each individual customer, the operator must distinguish the customers presenting a strong churn risk from the other. This work presents a data mining application leading to a churn detector. We compare artificial neural networks (ANN) which have been historically applied to this problem, to support vectors machines (SVM) which are particularly effective in classification and adapted to noisy data. Thus, the objective of this article is to compare the application of SVM and ANN to churn detection in prepaid cellular telephony. We show that SVM gives better results than ANN on this specific problem.",
"title": ""
},
{
"docid": "0d7b24e5676281f1e6dae9941f019a7e",
"text": "Determining patterns in data is an important and often difficult task for scientists and students. Unfortunately, graphing and analysis software typically is largely inaccessible to users with vision impairment. Using sound to represent data (i.e., sonification or auditory graphs) can make data analysis more accessible; however, there are few guidelines for designing such displays for maximum effectiveness. One crucial yet understudied design issue is exactly how changes in data (e.g., temperature) are mapped onto changes in sound (e.g., pitch), and how this may depend on the specific user. In this study, magnitude estimation was used to determine preferred data-to-display mappings, polarities, and psychophysical scaling functions relating data values to underlying acoustic parameters (frequency, tempo, or modulation index) for blind and visually impaired listeners. The resulting polarities and scaling functions are compared to previous results with sighted participants. There was general agreement about polarities obtained with the two listener populations, with some notable exceptions. There was also evidence for strong similarities regarding the magnitudes of the slopes of the scaling functions, again with some notable differences. For maximum effectiveness, sonification software designers will need to consider carefully their intended users’ vision abilities. Practical implications and limitations are discussed.",
"title": ""
},
{
"docid": "7ba8492090482fe9d05d5adcea23a120",
"text": "The sequential minimal optimization (SMO) algorithm has been widely used for training the support vector machine (SVM). In this paper, we present the first chip design for sequential minimal optimization. This chip is implemented as an intellectual property (IP) core, suitable to be utilized in an SVM-based recognition system on a chip. The proposed SMO chip has been tested to be fully functional, using a prototype system based on the Altera DE2 board with Cyclone II 2C70 FPGA (field-programmable gate array).",
"title": ""
},
{
"docid": "4dd28201b87acf7705ea91f9e9e4a330",
"text": "Because individual crowd workers often exhibit high variance in annotation accuracy, we often ask multiple crowd workers to label each example to infer a single consensus label. While simple majority vote computes consensus by equally weighting each worker’s vote, weighted voting assigns greater weight to more accurate workers, where accuracy is estimated by inner-annotator agreement (unsupervised) and/or agreement with known expert labels (supervised). In this paper, we investigate the annotation cost vs. consensus accuracy benefit from increasing the amount of expert supervision. To maximize benefit from supervision, we propose a semi-supervised approach which infers consensus labels using both labeled and unlabeled examples. We compare our semi-supervised approach with several existing unsupervised and supervised baselines, evaluating on both synthetic data and Amazon Mechanical Turk data. Results show (a) a very modest amount of supervision can provide significant benefit, and (b) consensus accuracy from full supervision with a large amount of labeled data is matched by our semi-supervised approach with much less supervision.",
"title": ""
},
{
"docid": "0f6806c44bf6fa7e6a2c3fb02ef8781b",
"text": "Air quality has been negatively affected by industrial activities, which have caused imbalances in nature. The issue of air pollution has become a big concern for many people, especially those living in industrial areas. Air pollution levels can be measured using smart sensors. Additionally, Internet of Things (IoT) technology can be integrated to remotely detect pollution without any human interaction. The data gathered by such a system can be transmitted instantly to a web-based application to facilitate monitoring real time data and allow immediate risk management. In this paper, we describe an entire Internet of Things (IoT) system that monitors air pollution by collecting real-time data in specific locations. This data is analyzed and measured against a predetermined threshold. The collected data is sent to the concerned official organization to notify them in case of any violation so that they can take the necessary measures. Furthermore, if the value of the measured pollutants exceeds the threshold, an alarm system is triggered taking several actions to warn the surrounding people.",
"title": ""
},
{
"docid": "2a8f2e8e4897f03c89d9e8a6bf8270f3",
"text": "BACKGROUND\nThe aging of the population is an inexorable change that challenges governments and societies in every developed country. Based on clinical and empirical data, social isolation is found to be prevalent among elderly people, and it has negative consequences on the elderly's psychological and physical health. Targeting social isolation has become a focus area for policy and practice. Evidence indicates that contemporary information and communication technologies (ICT) have the potential to prevent or reduce the social isolation of elderly people via various mechanisms.\n\n\nOBJECTIVE\nThis systematic review explored the effects of ICT interventions on reducing social isolation of the elderly.\n\n\nMETHODS\nRelevant electronic databases (PsycINFO, PubMed, MEDLINE, EBSCO, SSCI, Communication Studies: a SAGE Full-Text Collection, Communication & Mass Media Complete, Association for Computing Machinery (ACM) Digital Library, and IEEE Xplore) were systematically searched using a unified strategy to identify quantitative and qualitative studies on the effectiveness of ICT-mediated social isolation interventions for elderly people published in English between 2002 and 2015. Narrative synthesis was performed to interpret the results of the identified studies, and their quality was also appraised.\n\n\nRESULTS\nTwenty-five publications were included in the review. Four of them were evaluated as rigorous research. Most studies measured the effectiveness of ICT by measuring specific dimensions rather than social isolation in general. ICT use was consistently found to affect social support, social connectedness, and social isolation in general positively. The results for loneliness were inconclusive. Even though most were positive, some studies found a nonsignificant or negative impact. More importantly, the positive effect of ICT use on social connectedness and social support seemed to be short-term and did not last for more than six months after the intervention. The results for self-esteem and control over one's life were consistent but generally nonsignificant. ICT was found to alleviate the elderly's social isolation through four mechanisms: connecting to the outside world, gaining social support, engaging in activities of interests, and boosting self-confidence.\n\n\nCONCLUSIONS\nMore well-designed studies that contain a minimum risk of research bias are needed to draw conclusions on the effectiveness of ICT interventions for elderly people in reducing their perceived social isolation as a multidimensional concept. The results of this review suggest that ICT could be an effective tool to tackle social isolation among the elderly. However, it is not suitable for every senior alike. Future research should identify who among elderly people can most benefit from ICT use in reducing social isolation. Research on other types of ICT (eg, mobile phone-based instant messaging apps) should be conducted to promote understanding and practice of ICT-based social-isolation interventions for elderly people.",
"title": ""
},
{
"docid": "77872917746b9d177273f178f6e6b0e4",
"text": "Ultra wideband (UWB) technology based primarily on the impulse radio paradigm has a huge potential for revolutionizing the world of digital communications especially wireless communications. UWB provides the integrated capabilities of data communications, advanced radar and precision tracking, location, imperceptibility and low power operation. It is therefore ideally suited for the development of robust and rapid wireless networks in complex and hostile environments. The distinct physical layer properties of the UWB technology warrants efficient design of medium access control (MAC) protocols. This paper introduces the unique UWB physical characteristics compared to the existing wireless technologies and discusses current research on MAC protocols for UWB. This report surveys most of the MAC protocols proposed so far for UWB, and may instigate further activities on this important and evolving technology.",
"title": ""
},
{
"docid": "7410cc6d6335d7bfc4b720ac429d0e85",
"text": "This paper provides examples from the last fifty years of scientific and technological innovations that provide relatively easy, quick and affordable means of addressing key water management issues. Scientific knowledge and technological innovation can help open up previously closed decision-making systems. Four of these tools are discussed in this paper: a) the opportunities afforded by virtual water trade; b) the silent revolution for beneficial use of groundwater; c) salt water desalination; and finally, d) the use of remote sensing and geographic information systems (GIS). Together these advances are changing the options available to address water and food security that have been predominant for centuries in the minds of most water decision-makers.",
"title": ""
},
{
"docid": "5f39c5df4127824b3408e2b34f000bee",
"text": "Objective To evaluate the information, its source, beliefs an d perceptions of acne patients regarding acne and their expectations about treatment. Patients and methods All acne patients visiting Dermatology outpatient c lini at WAPDA Teaching Hospital Complex, Lahore and at private practice fo r management were asked to fill a voluntary questionnaire containing information about patients ’ beliefs and perception about acne. Grading was done by a dermatologist. Result 449 patients completed the pro forma. Males were 37 % and females 63%. 54.1% of patients waited for one year to have treatment. More than 60 % thought acne as a curable disease and more than 50% expected it to clear in 2-4 weeks. Most of them decided themselves to visit the doctor or were influenced by their parents. Most of them gath ered information regarding acne from close relatives and friends. Infection and poor hygiene ( less washing of face with soap) was thought to be the most important cause. Facial masks and lotions were most commonly tried non-prescription acne products. 45% thought that acne had a severe i mpact on their self-image. Topical treatment was the most desired one. More than 40% of patients had grade IV acne and there was no significant difference between males and females re garding grade wise presentation. Conclusion Community-based health education program is require d to increase the awareness about acne and to resolve the misconceptions.",
"title": ""
},
{
"docid": "b0ac318eea1dc5f6feb9fdaf5f554752",
"text": "In this paper an RSA calculation architecture is proposed for FPGAs that addresses the issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of Public Key crypto systems. Using techniques based around Montgomery math for exponentiation, the proposed RSA calculation architecture is compared to existing FPGA-based solutions for speed, FPGA utilisation, and scalability. The paper will cover the RSA encryption algorithm, Montgomery math, basic FPGA technology, and the implementation details of the proposed RSA calculation architecture. Conclusions will be drawn, beyond the singular improvements over existing architectures, which highlight the advantages of a fully flexible & parameterisable design.",
"title": ""
},
{
"docid": "cb702c48a242c463dfe1ac1f208acaa2",
"text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.",
"title": ""
},
{
"docid": "3d3cfac5b8e1bf5099c0039402696916",
"text": "As long as athletes strive to attain optimal performance states and consistently reach high performance goals, psychological interventions will be used to assist in the development of skill and the maintenance of performance. In the pursuit of these goals, newer evidence-driven models based on mindfulnessand acceptance-based approaches have been designed to achieve these ends. Based upon questionable efficacy data for traditional psychological skills training procedures that emphasize reduction or control of internal processes, mindfulnessand acceptance-based approaches develop skills of nonjudging mindful awareness, mindful attention, and experiential acceptance to aid in the pursuit of valued goals. The most formalized and researched mindfulnessand acceptance-based approach within sport psychology is the manualized Mindfulness-Acceptance-Commitment (MAC) protocol. In the 8 years since the MAC was first developed and presented, and the 5 years since the first publication on the protocol, the MAC program has accumulated a continually growing empirical base for both its underlying theory and intervention efficacy as a performance enhancement intervention. This article reviews the empirical and theoretical foundations of the mindfulnessand acceptance-based approaches in general, and MAC in particular; reviews the accumulated empirical findings in support of the MAC approach for performance enhancement; and presents recent MAC developments and suggested future directions.",
"title": ""
},
{
"docid": "2098191fad9a065bcb117f6cd7299dd7",
"text": "The growth of both IT technology and the Internet Communication has involved the development of lot of encrypted information. Among others techniques of message hiding, stenography is one them but more suspicious as no one cannot see the secret message. As we always use the MS Office, there are many ways to hide secret messages by using PowerPoint as normal file. In this paper, we propose a new technique to find a hidden message by analysing the in PowerPoint file using EnCase Transcript. The result analysis shows that Steganography technique had hidden a certain number of message which are invisible to naked eye.",
"title": ""
},
{
"docid": "dcdaeb7c1da911d0b1a2932be92e0fb4",
"text": "As computational agents are increasingly used beyond research labs, their success will depend on their ability to learn new skills and adapt to their dynamic, complex environments. If human users—without programming skills— can transfer their task knowledge to agents, learning can accelerate dramatically, reducing costly trials. The tamer framework guides the design of agents whose behavior can be shaped through signals of approval and disapproval, a natural form of human feedback. More recently, tamer+rl was introduced to enable human feedback to augment a traditional reinforcement learning (RL) agent that learns from a Markov decision process’s (MDP) reward signal. We address limitations of prior work on tamer and tamer+rl, contributing in two critical directions. First, the four successful techniques for combining human reward with RL from prior tamer+rl work are tested on a second task, and these techniques’ sensitivities to parameter changes are analyzed. Together, these examinations yield more general and prescriptive conclusions to guide others who wish to incorporate human knowledge into an RL algorithm. Second, tamer+rl has thus far been limited to a sequential setting, in which training occurs before learning from MDP reward. In this paper, we introduce a novel algorithm that shares the same spirit as tamer+rl but learns simultaneously from both reward sources, enabling the human feedback to come at any time during the reinforcement learning process. We call this algorithm simultaneous tamer+rl. To enable simultaneous learning, we introduce a new technique that appropriately determines the magnitude of the human model’s influence on the RL algorithm throughout time and state-action space.",
"title": ""
},
{
"docid": "f3f15a37a1d1a2a3a3647dc14f075297",
"text": "Stress is known to inhibit neuronal growth in the hippocampus. In addition to reducing the size and complexity of the dendritic tree, stress and elevated glucocorticoid levels are known to inhibit adult neurogenesis. Despite the negative effects of stress hormones on progenitor cell proliferation in the hippocampus, some experiences which produce robust increases in glucocorticoid levels actually promote neuronal growth. These experiences, including running, mating, enriched environment living, and intracranial self-stimulation, all share in common a strong hedonic component. Taken together, the findings suggest that rewarding experiences buffer progenitor cells in the dentate gyrus from the negative effects of elevated stress hormones. This chapter considers the evidence that stress and glucocorticoids inhibit neuronal growth along with the paradoxical findings of enhanced neuronal growth under rewarding conditions with a view toward understanding the underlying biological mechanisms.",
"title": ""
},
{
"docid": "7314ac0f034aa2984dc99666997eb319",
"text": "This paper examines consumer adoption of a new electronic payment service, mobile payments. The empirical data for the explorative study was collected by establishing six focus group sessions. The results suggest that the relative advantages of mobile payments include time and place independence, availability, possibilities for remote purchases, and queue avoidance. The interviewees found mobile payments to be mostly compatible with digital content and service purchases and to complement small value cash payments. Interestingly, the findings suggest that the relative advantages of mobile payments depend on certain situational factors such as lack of other payment methods or urgency. There are, however, several barriers to the adoption of mobile payments, including premium pricing of the payments, complexity of payment procedures, a lack of widespread merchant acceptance, and perceived risks.",
"title": ""
}
] |
scidocsrr
|
50de4fab20e9ea788b97833899d88786
|
A stretchable and screen-printed electrochemical sensor for glucose determination in human perspiration.
|
[
{
"docid": "63efc8aecf9b28b2a2bbe4514ed3a7fe",
"text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the statistics and chemometrics for analytical chemistry book. You can open the device and get the book by on-line.",
"title": ""
}
] |
[
{
"docid": "f3641cacf284444ac45f0e085c7214bf",
"text": "Recognition that the entire central nervous system (CNS) is highly plastic, and that it changes continually throughout life, is a relatively new development. Until very recently, neuroscience has been dominated by the belief that the nervous system is hardwired and changes at only a few selected sites and by only a few mechanisms. Thus, it is particularly remarkable that Sir John Eccles, almost from the start of his long career nearly 80 years ago, focused repeatedly and productively on plasticity of many different kinds and in many different locations. He began with muscles, exploring their developmental plasticity and the functional effects of the level of motor unit activity and of cross-reinnervation. He moved into the spinal cord to study the effects of axotomy on motoneuron properties and the immediate and persistent functional effects of repetitive afferent stimulation. In work that combined these two areas, Eccles explored the influences of motoneurons and their muscle fibers on one another. He studied extensively simple spinal reflexes, especially stretch reflexes, exploring plasticity in these reflex pathways during development and in response to experimental manipulations of activity and innervation. In subsequent decades, Eccles focused on plasticity at central synapses in hippocampus, cerebellum, and neocortex. His endeavors extended from the plasticity associated with CNS lesions to the mechanisms responsible for the most complex and as yet mysterious products of neuronal plasticity, the substrates underlying learning and memory. At multiple levels, Eccles' work anticipated and helped shape present-day hypotheses and experiments. He provided novel observations that introduced new problems, and he produced insights that continue to be the foundation of ongoing basic and clinical research. This article reviews Eccles' experimental and theoretical contributions and their relationships to current endeavors and concepts. It emphasizes aspects of his contributions that are less well known at present and yet are directly relevant to contemporary issues.",
"title": ""
},
{
"docid": "c355dc8d0ec6b673cea3f2ab39d13701",
"text": "Errors in estimating and forecasting often result from the failure to collect and consider enough relevant information. We examine whether attributes associated with persistence in information acquisition can predict performance in an estimation task. We focus on actively open-minded thinking (AOT), need for cognition, grit, and the tendency to maximize or satisfice when making decisions. In three studies, participants made estimates and predictions of uncertain quantities, with varying levels of control over the amount of information they could collect before estimating. Only AOT predicted performance. This relationship was mediated by information acquisition: AOT predicted the tendency to collect information, and information acquisition predicted performance. To the extent that available information is predictive of future outcomes, actively open-minded thinkers are more likely than others to make accurate forecasts.",
"title": ""
},
{
"docid": "12932a77e9fabb8273175a6ca8fc5f49",
"text": "There are nearly a million known species of flying insects and 13 000 species of flying warm-blooded vertebrates, including mammals, birds and bats. While in flight, their wings not only move forward relative to the air, they also flap up and down, plunge and sweep, so that both lift and thrust can be generated and balanced, accommodate uncertain surrounding environment, with superior flight stability and dynamics with highly varied speeds and missions. As the size of a flyer is reduced, the wing-to-body mass ratio tends to decrease as well. Furthermore, these flyers use integrated system consisting of wings to generate aerodynamic forces, muscles to move the wings, and sensing and control systems to guide and manoeuvre. In this article, recent advances in insect-scale flapping-wing aerodynamics, flexible wing structures, unsteady flight environment, sensing, stability and control are reviewed with perspective offered. In particular, the special features of the low Reynolds number flyers associated with small sizes, thin and light structures, slow flight with comparable wind gust speeds, bioinspired fabrication of wing structures, neuron-based sensing and adaptive control are highlighted.",
"title": ""
},
{
"docid": "73f9c6fc5dfb00cc9b05bdcd54845965",
"text": "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models.",
"title": ""
},
{
"docid": "ba3bdb8bc6831fd3df737a24b7656b12",
"text": "I ntegrated circuit processing technology offers increasing integration density, which fuels microprocessor performance growth. Within 10 years it will be possible to integrate a billion transistors on a reasonably sized silicon chip. At this integration level, it is necessary to find parallelism to effectively utilize the transistors. Currently, processor designs dynamically extract parallelism with these transistors by executing many instructions within a single, sequential program in parallel. To find independent instructions within a sequential sequence of instructions, or thread of control, today's processors increasingly make use of sophisticated architectural features. Examples are out-of-order instruction execution and speculative execution of instructions after branches predicted with dynamic hardware branch prediction techniques. Future performance improvements will require processors to be enlarged to execute more instructions per clock cycle. 1 However, reliance on a single thread of control limits the parallelism available for many applications, and the cost of extracting parallelism from a single thread is becoming prohibitive. This cost manifests itself in numerous ways, including increased die area and longer design and verification times. In general, we see diminishing returns when trying to extract parallelism from a single thread. To continue this trend will trade only incremental performance increases for large increases in overall complexity. Although this parallelization might be achieved dynamically in hardware, we advocate using a software approach instead, allowing the hardware to be simple and fast. Emerging parallel compilation technologies , 2 an increase in the use of inherently parallel applications (such as multimedia), and more widespread use of multitasking operating systems should make this feasible. Researchers have proposed two alternative microar-chitectures that exploit multiple threads of control: simultaneous multithreading (SMT) 3 and chip multi-processors (CMP). 4 SMT processors augment wide (issuing many instructions at once) superscalar processors with hardware that allows the processor to execute instructions from multiple threads of control concurrently when possible, dynamically selecting and executing instructions from many active threads simultaneously. This promotes much higher utilization of the processor's execution resources and provides latency tolerance in case a thread stalls due to cache misses or data dependencies. When multiple threads are not available, however, the SMT simply looks like a conventional wide-issue superscalar. CMPs use relatively simple single-thread processor cores to exploit only moderate amounts of parallelism within any one thread, while executing multiple threads in parallel across multiple processor cores. If an application cannot be effectively decomposed into threads, CMPs will be underutilized. From a …",
"title": ""
},
{
"docid": "1541b49d4f8cade557d6944eb79e36c9",
"text": "In recent years, a plethora of approaches have been proposed to deal with the increasingly challenging task of multi-output regression. This paper provides a survey on state-of-the-art multi-output regression methods, that are categorized as problem transformation and algorithm adaptation methods. In addition, we present the mostly used performance evaluation measures, publicly available data sets for multi-output regression real-world problems, as well as open-source software frameworks.",
"title": ""
},
{
"docid": "2f20e5792104b67143b7dcc43954317e",
"text": "Resource Description Framework (RDF) was designed with the initial goal of developing metadata for the Internet. While the Internet is a conglomeration of many interconnected networks and computers, most of today's best RDF storage solutions are confined to a single node. Working on a single node has significant scalability issues, especially considering the magnitude of modern day data. In this paper we introduce a scalable RDF data management system that uses Accumulo, a Google Bigtable variant. We introduce storage methods, indexing schemes, and query processing techniques that scale to billions of triples across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases, our system outperforms existing distributed RDF solutions, even systems much more complex than ours.",
"title": ""
},
{
"docid": "d9cdbff5533837858b1cd8334acd128d",
"text": "A four-leaf steel spring used in the rear suspension system of light vehicles is analyzed using ANSYS V5.4 software. The finite element results showing stresses and deflections verified the existing analytical and experimental solutions. Using the results of the steel leaf spring, a composite one made from fiberglass with epoxy resin is designed and optimized using ANSYS. Main consideration is given to the optimization of the spring geometry. The objective was to obtain a spring with minimum weight that is capable of carrying given static external forces without failure. The design constraints were stresses (Tsai–Wu failure criterion) and displacements. The results showed that an optimum spring width decreases hyperbolically and the thickness increases linearly from the spring eyes towards the axle seat. Compared to the steel spring, the optimized composite spring has stresses that are much lower, the natural frequency is higher and the spring weight without eye units is nearly 80% lower. 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b85112d759d9facedacb3935ce2d0de5",
"text": "Internet is one of the primary sources of Big Data. Rise of the social networking platforms are creating enormous amount of data in every second where human emotions are constantly expressed in real-time. The sentiment behind each post, comments, likes can be found using opinion mining. It is possible to determine business values from these objects and events if sentiment analysis is done on the huge amount of data. Here, we have chosen FOODBANK which is a very popular Facebook group in Bangladesh; to analyze sentiment of the data to find out their market values.",
"title": ""
},
{
"docid": "3654827519075eac6bfe5ee442c6d4b2",
"text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.",
"title": ""
},
{
"docid": "ac885eedad9c777e2980460d987c7cfb",
"text": "BACKGROUND\nOne of the greatest problems for India is undernutrition among children. The country is still struggling with this problem. Malnutrition, the condition resulting from faulty nutrition, weakens the immune system and causes significant growth and cognitive delay. Growth assessment is the measurement that best defines the health and nutritional status of children, while also providing an indirect measurement of well-being for the entire population.\n\n\nMETHODS\nA cross-sectional study, in which we explored nutritional status in school-age slum children and analyze factors associated with malnutrition with the help of a pre-designed and pre-tested questionnaire, anthropometric measurements and clinical examination from December 2010 to April 2011 in urban slums of Bareilly, Uttar-Pradesh (UP), India.\n\n\nRESULT\nThe mean height and weight of boys and girls in the study group was lower than the CDC 2000 (Centers for Disease Control and Prevention) standards in all age groups. Regarding nutritional status, prevalence of stunting and underweight was highest in age group 11 yrs to 13 yrs whereas prevalence of wasting was highest in age group 5 yrs to 7 yrs. Except refractive errors all illnesses are more common among girls, but this gender difference is statistically significant only for anemia and rickets. The risk of malnutrition was significantly higher among children living in joint families, children whose mother's education was [less than or equal to] 6th standard and children with working mothers.\n\n\nCONCLUSIONS\nMost of the school-age slum children in our study had a poor nutritional status. Interventions such as skills-based nutrition education, fortification of food items, effective infection control, training of public healthcare workers and delivery of integrated programs are recommended.",
"title": ""
},
{
"docid": "0769e3e0ab83c6884b73baa4a60e5db1",
"text": "We introduce a novel physical layer scheme for single user Multiple-Input Multiple-Output (MIMO) communications based on unsupervised deep learning using an autoencoder. This method extends prior work on the joint optimization of physical layer representation and encoding and decoding processes as a single end-to-end task by expanding transmitter and receivers to the multi-antenna case. We introduce a widely used domain appropriate wireless channel impairment model (Rayleigh fading channel), into the autoencoder optimization problem in order to directly learn a system which optimizes for it. We considered both spatial diversity and spatial multiplexing techniques in our implementation. Our deep learning-based approach demonstrates significant potential for learning schemes which approach and exceed the performance of the methods which are widely used in existing wireless MIMO systems. We discuss how the proposed scheme can be easily adapted for open-loop and closed-loop operation in spatial diversity and multiplexing modes and extended use with only compact binary channel state information (CSI) as feedback.",
"title": ""
},
{
"docid": "c1aa687c4a48cfbe037fe87ed4062dab",
"text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.",
"title": ""
},
{
"docid": "5b3289254669f6891c05d3d0c70a056e",
"text": "Building change detection is a major issue for urban area monitoring. Due to different imaging conditions and sensor parameters, 2-D information delivered by satellite images from different dates is often not sufficient when dealing with building changes. Moreover, due to the similar spectral characteristics, it is often difficult to distinguish buildings from other man-made constructions, like roads and bridges, during the change detection procedure. Therefore, stereo imagery is of importance to provide the height component which is very helpful in analyzing 3-D building changes. In this paper, we propose a change detection method based on stereo imagery and digital surface models (DSMs) generated with stereo matching methodology and provide a solution by the joint use of height changes and Kullback-Leibler divergence similarity measure between the original images. The Dempster-Shafer fusion theory is adopted to combine these two change indicators to improve the accuracy. In addition, vegetation and shadow classifications are used as no-building change indicators for refining the change detection results. In the end, an object-based building extraction method based on shape features is performed. For evaluation purpose, the proposed method is applied in two test areas, one is in an industrial area in Korea with stereo imagery from the same sensor and the other represents a dense urban area in Germany using stereo imagery from different sensors with different resolutions. Our experimental results confirm the efficiency and high accuracy of the proposed methodology even for different kinds and combinations of stereo images and consequently different DSM qualities.",
"title": ""
},
{
"docid": "7074c90ee464e4c1d0e3515834835817",
"text": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
"title": ""
},
{
"docid": "ce463006a11477c653c15eb53f673837",
"text": "This paper presents a meaning-based statistical math word problem (MWP) solver with understanding, reasoning and explanation. It comprises a web user interface and pipelined modules for analysing the text, transforming both body and question parts into their logic forms, and then performing inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating the extracted math quantity with its associated syntactic and semantic information (which specifies the physical meaning of that quantity). Those role-tags are then used to identify the desired operands and filter out irrelevant quantities (so that the answer can be obtained precisely). Since the physical meaning of each quantity is explicitly represented with those role-tags and used in the inference process, the proposed approach could explain how the answer is obtained in a human comprehensible way.",
"title": ""
},
{
"docid": "29eb0f70a0d997a784831149283a6eb9",
"text": "– The techniques of artificial intelligence based in fuzzy logic and neural networks are frequently applied together. The reasons to combine these two paradigms come out of the difficulties and inherent limitations of each isolated paradigm. Generically, when they are used in a combined way, they are called Neuro-Fuzzy Systems. This term, however, is often used to assign a specific type of system that integrates both techniques. This type of system is characterised by a fuzzy system where fuzzy sets and fuzzy rules are adjusted using input output patterns. There are several different implementations of neuro-fuzzy systems, where each author defined its own model. This article summarizes a general vision of the area describing the most known hybrid neuro-fuzzy techniques, its advantages and disadvantages.",
"title": ""
},
{
"docid": "3f2081f9c1cf10e9ec27b2541f828320",
"text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. However, aircraft engine systems are complex with both intangible and uncertain factors, it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.",
"title": ""
},
{
"docid": "78c6ec58cec2607d5111ee415d683525",
"text": "Forty-three normal hearing participants were tested in two experiments, which focused on temporal coincidence in auditory visual (AV) speech perception. In these experiments, audio recordings of/pa/and/ba/were dubbed onto video recordings of /ba/or/ga/, respectively (ApVk, AbVg), to produce the illusory \"fusion\" percepts /ta/, or /da/ [McGurk, H., & McDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-747]. In Experiment 1, an identification task using McGurk pairs with asynchronies ranging from -467 ms (auditory lead) to +467 ms was conducted. Fusion responses were prevalent over temporal asynchronies from -30 ms to +170 ms and more robust for audio lags. In Experiment 2, simultaneity judgments for incongruent and congruent audiovisual tokens (AdVd, AtVt) were collected. McGurk pairs were more readily judged as asynchronous than congruent pairs. Characteristics of the temporal window over which simultaneity and fusion responses were maximal were quite similar, suggesting the existence of a 200 ms duration asymmetric bimodal temporal integration window.",
"title": ""
},
{
"docid": "7c482427e4f0305c32210093e803eb78",
"text": "A healable transparent capacitive touch screen sensor has been fabricated based on a healable silver nanowire-polymer composite electrode. The composite electrode features a layer of silver nanowire percolation network embedded into the surface layer of a polymer substrate comprising an ultrathin soldering polymer layer to confine the nanowires to the surface of a healable Diels-Alder cycloaddition copolymer and to attain low contact resistance between the nanowires. The composite electrode has a figure-of-merit sheet resistance of 18 Ω/sq with 80% transmittance at 550 nm. A surface crack cut on the conductive surface with 18 Ω is healed by heating at 100 °C, and the sheet resistance recovers to 21 Ω in 6 min. A healable touch screen sensor with an array of 8×8 capacitive sensing points is prepared by stacking two composite films patterned with 8 rows and 8 columns of coupling electrodes at 90° angle. After deliberate damage, the coupling electrodes recover touch sensing function upon heating at 80 °C for 30 s. A capacitive touch screen based on Arduino is demonstrated capable of performing quick recovery from malfunction caused by a razor blade cutting. After four cycles of cutting and healing, the sensor array remains functional.",
"title": ""
}
] |
scidocsrr
|
0fd9dbe67b0f4c11508965f4a5b1f2cf
|
A decision support system for predicting students' performance
|
[
{
"docid": "de1fe89adbc6e4a8993eb90cae39d97e",
"text": "Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial neural networks. Researchers in these disciplines, sometimes working on quite different problems, identified similar issues and heuristics for decision tree construction. This paper surveys existing work on decision tree construction, attempting to identify the important issues involved, directions the work has taken and the current state of the art.",
"title": ""
}
] |
[
{
"docid": "8b822d4c223d38d32c374079952f57b5",
"text": "Female online shoppers: examining the mediating roles of e-satisfaction and e-trust on e-loyalty development Shihyu Chou Chi-Wen Chen Jiun-You Lin Article information: To cite this document: Shihyu Chou Chi-Wen Chen Jiun-You Lin , (2015),\"Female online shoppers: examining the mediating roles of e-satisfaction and e-trust on e-loyalty development\", Internet Research, Vol. 25 Iss 4 pp. Permanent link to this document: http://dx.doi.org/10.1108/IntR-01-2014-0006",
"title": ""
},
{
"docid": "9df8e744b6a82875f3d25ce42224e609",
"text": "In this study, we investigated the neural substrate involved in the comprehension of novel metaphoric sentences by comparing the findings to those obtained with literal and anomalous sentences using event-related functional magnetic resonance imaging (fMRI). Stimuli consisted of 63 copula sentences (\"An A is a B\") in Japanese with metaphorical, literal, or anomalous meanings. Thirteen normal participants read these sentences silently and responded as to whether or not they could understand the meaning of each sentence. When participants read metaphoric sentences in contrast to literal sentences, higher activation was seen in the left medial frontal cortex (MeFC: Brodmann's area (BA) 9/10), the left superior frontal cortex (SFC: BA 9), and the left inferior frontal cortex (IFC: BA 45). The opposite contrast (literal sentences in contrast to metaphoric sentences) gave higher activation in the precuneus (BA 7) and the right middle and SFC (BA 8/9). These findings suggest that metaphor comprehension is involved in specific neural mechanisms of semantic and pragmatic processing which differ from those in literal comprehension. Especially, our results suggest that activation in the left IFC reflects the semantic processing and that activation in the MeFC reflects the process of inference for metaphorical interpretation to establish semantic coherence.",
"title": ""
},
{
"docid": "ca8517f04ef743a4ade4cdbdb8f21db7",
"text": "UASNs are widely used in many applications, and many studies have been conducted. However, most current research projects have not taken network security into consideration, despite the fact that a UASN is typically vulnerable to malicious attacks due to the unique characteristics of an underwater acoustic communication channel (e.g., low communication bandwidth, long propagation delays, and high bit error rates). In addition, the significant differences between UASNs and terrestrial wireless sensor networks entail the urgent and rapid development of secure communication mechanisms for underwater sensor nodes. For the above mentioned reasons, this article aims to present a somewhat comprehensive survey of the emerging topics arising from secure communications in UASNs, which naturally lead to a great number of open research issues outlined afterward.",
"title": ""
},
{
"docid": "cf2c8ab1b22ae1a33e9235a35f942e7e",
"text": "Adversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress toward this problem by showing that non-negative weight constraints can be used to improve resistance in specific scenarios. In particular, we show that they can provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show the potential for non-negativity to be helpful to non-binary problems by applying it to image",
"title": ""
},
{
"docid": "327d071f71bf39bcd171f85746047a02",
"text": "Advances in information and communication technologies have led to the emergence of Internet of Things (IoT). In the healthcare environment, the use of IoT technologies brings convenience to physicians and patients as they can be applied to various medical areas (such as constant real-time monitoring, patient information management, medical emergency management, blood information management, and health management). The radio-frequency identification (RFID) technology is one of the core technologies of IoT deployments in the healthcare environment. To satisfy the various security requirements of RFID technology in IoT, many RFID authentication schemes have been proposed in the past decade. Recently, elliptic curve cryptography (ECC)-based RFID authentication schemes have attracted a lot of attention and have been used in the healthcare environment. In this paper, we discuss the security requirements of RFID authentication schemes, and in particular, we present a review of ECC-based RFID authentication schemes in terms of performance and security. Although most of them cannot satisfy all security requirements and have satisfactory performance, we found that there are three recently proposed ECC-based authentication schemes suitable for the healthcare environment in terms of their performance and security.",
"title": ""
},
{
"docid": "5673fc81ba9a1d26531bcf7a1572e873",
"text": "Spatio-temporal channel information obtained via channel sounding is invaluable for implementing equalizers, multi-antenna systems, and dynamic modulation schemes in next-generation wireless systems. The most straightforward means of performing channel measurements is in the frequency domain using a vector network analyzer (VNA). However, the high cost of VNAs often leads engineers to seek more economical solutions by measuring the wireless channel in the time domain. The bandwidth compression of the sliding correlator channel sounder makes it the preferred means of performing time-domain channel measurements.",
"title": ""
},
{
"docid": "fadd3a1f223af4da6639730d8aec271c",
"text": "Reservoir computing has emerged in the last decade as an alternative to gradient descent methods for training recurrent neural networks. Echo State Network (ESN) is one of the key reservoir computing “flavors”. While being practical, conceptually simple, and easy to implement, ESNs require some experience and insight to achieve the hailed good performance in many tasks. Here we present practical techniques and recommendations for successfully applying ESNs, as well as some more advanced application-specific modifications. To appear in Neural Networks: Tricks of the Trade, Reloaded. G. Montavon, G. B. Orr, and K.-R. Müller, editors, Springer, 2012.",
"title": ""
},
{
"docid": "85fb2cb99e5320ddde182d6303164da8",
"text": "The uncertainty about whether, in China, the genus Melia (Meliaceae) consists of one species (M. azedarach Linnaeus) or two species (M. azedarach and M. toosendan Siebold & Zuccarini) remains to be clarified. Although the two putative species are morphologically distinguishable, genetic evidence supporting their taxonomic separation is lacking. Here, we investigated the genetic diversity and population structure of 31 Melia populations across the natural distribution range of the genus in China. We used sequence-related amplified polymorphism (SRAP) markers and obtained 257 clearly defined bands amplified by 20 primers from 461 individuals. The polymorphic loci (P) varied from 35.17% to 76.55%, with an overall mean of 58.24%. Nei’s gene diversity (H) ranged from 0.13 to 0.31, with an overall mean of 0.20. Shannon’s information index (I) ranged from 0.18 to 0.45, with an average of 0.30. The genetic diversity of the total population (Ht) and within populations (Hs) was 0.37 ̆ 0.01 and 0.20 ̆ 0.01, respectively. Population differentiation was substantial (Gst = 0.45), and gene flow was low. Of the total variation, 31.41% was explained by differences among putative species, 19.17% among populations within putative species, and 49.42% within populations. Our results support the division of genus Melia into two species, which is consistent with the classification based on the morphological differentiation.",
"title": ""
},
{
"docid": "c35608f769b7844adc482ff9f7a79278",
"text": "Video annotation is an effective way to facilitate content-based analysis for videos. Automatic machine learning methods are commonly used to accomplish this task. Among these, active learning is one of the most effective methods, especially when the training data cost a great deal to obtain. One of the most challenging problems in active learning is the sample selection. Various sampling strategies can be used, such as uncertainty, density, and diversity, but it is difficult to strike a balance among them. In this paper, we provide a visualization-based batch mode sampling method to handle such a problem. An iso-contour-based scatterplot is used to provide intuitive clues for the representativeness and informativeness of samples and assist users in sample selection. A semisupervised metric learning method is incorporated to help generate an effective scatterplot reflecting the high-level semantic similarity for visual sample selection. Moreover, both quantitative and qualitative evaluations are provided to show that the visualization-based method can effectively enhance sample selection in active learning.",
"title": ""
},
{
"docid": "3230fba68358a08ab9112887bdd73bb9",
"text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.",
"title": ""
},
{
"docid": "8d7a41aad86633c9bb7da8adfde71883",
"text": "Nuclear receptors (NRs) are major pharmacological targets that allow an access to the mechanisms controlling gene regulation. As such, some NRs were identified as biological targets of active compounds contained in herbal remedies found in traditional medicines. We aim here to review this expanding literature by focusing on the informative articles regarding the mechanisms of action of traditional Chinese medicines (TCMs). We exemplified well-characterized TCM action mediated by NR such as steroid receptors (ER, GR, AR), metabolic receptors (PPAR, LXR, FXR, PXR, CAR) and RXR. We also provided, when possible, examples from other traditional medicines. From these, we draw a parallel between TCMs and phytoestrogens or endocrine disrupting chemicals also acting via NR. We define common principle of action and highlight the potential and limits of those compounds. TCMs, by finely tuning physiological reactions in positive and negative manners, could act, in a subtle but efficient way, on NR sensors and their transcriptional network.",
"title": ""
},
{
"docid": "73545ef815fb22fa048fed3e0bc2cc8b",
"text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.",
"title": ""
},
{
"docid": "5f0157139bff33057625686b7081a0c8",
"text": "A novel MIC/MMIC compatible microstrip to waveguide transition for X band is presented. The transition has realized on novel low cost substrate and its main features are: wideband operation, low insertion loss and feeding without a balun directly by the microstrip line.",
"title": ""
},
{
"docid": "5034984717b3528f7f47a1f88a3b1310",
"text": "ALL RIGHTS RESERVED. This document contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without express written permission from the author / publisher.",
"title": ""
},
{
"docid": "6992e0712e99e11b9ebe862c01c0882b",
"text": "This paper is in many respects a continuation of the earlier paper by the author published in Proc. R. Soc. A in 1998 entitled ‘A comprehensive methodology for the design of ships (and other complex systems)’. The earlier paper described the approach to the initial design of ships developedby the author during some 35years of design practice, including two previous secondments to teach ship design atUCL.Thepresent paper not only takes thatdevelopment forward, it also explains how the research tool demonstrating the author’s approach to initial ship design has now been incorporated in an industry based design system to provide a working graphically and numerically integrated design system. This achievement is exemplified by a series of practical design investigations, undertaken by the UCL Design Research Centre led by the author, which were mainly undertaken for industry clients in order to investigate real problems towhich the approachhasbrought significant insights.The other new strand in the present paper is the emphasis on the human factors or large scale ergonomics dimension, vital to complex and large scale design products but rarely hitherto beengiven sufficientprominence in the crucial formative stagesof large scale designbecauseof the inherent difficulties in doing so. The UCL Design Building Block approach has now been incorporated in the established PARAMARINE ship design system through a module entitled SURFCON. Work is now underway on an Engineering and Physical Sciences Research Council joint project with the University of Greenwich to interface the latter’s escape simulation toolmaritimeEXODUSwithSURFCONtoprovide initial design guidance to ship designers on personnelmovement. The paper’s concluding section considers the wider applicability of the integration of simulation during initial design with the graphically driven synthesis to other complex and large scale design tasks. The paper concludes by suggesting how such an approach to complex design can contribute to the teaching of designers and, moreover, how this designapproach can enable a creative qualitative approach to engineering design to be sustained despite the risk that advances in computer based methods might encourage emphasis being accorded to solely to quantitative analysis.",
"title": ""
},
{
"docid": "c9c440243d8a247f2daa9d0dbe3f478b",
"text": "Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier system where data bits are encoded to multiple subcarriers, while being sent simultaneously. This results in the optimal usage of bandwidth. A set of orthogonal sub-carriers together forms an OFDM symbol. To avoid ISI due to multi-path, successive OFDM symbols are separated by guard band. This makes the OFDM system resistant to multipath effects. The principles of OFDM modulation have been around since 1960s. However, recently, the attention toward OFDM has grown dramatically in the field of wireless and wired communication systems. This is reflected by the adoption of this technique in applications such as digital audio/video broadcast (DAB/DVB), wireless LAN (802.11a and HiperLAN2), broadband wireless (802.16) and xDSL. In this work, a pure VHDL design, integrated with some intellectual property (IP) blocks, is employed to implement an OFDM transmitter and receiver. In this paper design of OFDM system using IFFT and FFT blocks has been introduced and simulation was done on XILINX ISE 14.2 software. Keywords– FFT, IFFT, OFDM, QAM, VHDL.",
"title": ""
},
{
"docid": "ed0444685c9a629c7d1fda7c4912fd55",
"text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.",
"title": ""
},
{
"docid": "1095311bbe710412173f33134c7d47d4",
"text": "A large number of papers are appearing in the biomedical engineering literature that describe the use of machine learning techniques to develop classifiers for detection or diagnosis of disease. However, the usefulness of this approach in developing clinically validated diagnostic techniques so far has been limited and the methods are prone to overfitting and other problems which may not be immediately apparent to the investigators. This commentary is intended to help sensitize investigators as well as readers and reviewers of papers to some potential pitfalls in the development of classifiers, and suggests steps that researchers can take to help avoid these problems. Building classifiers should be viewed not simply as an add-on statistical analysis, but as part and parcel of the experimental process. Validation of classifiers for diagnostic applications should be considered as part of a much larger process of establishing the clinical validity of the diagnostic technique.",
"title": ""
},
{
"docid": "a93351d3fb9dc69868a11c8655ec1541",
"text": "Dry powder inhaler formulations comprising commercial lactose–drug blends can show restricted detachment of drug from lactose during aerosolisation, which can lead to poor fine particle fractions (FPFs) which are suboptimal. The aim of the present study was to investigate whether the crystallisation of lactose from different ethanol/butanol co-solvent mixtures could be employed as a method of altering the FPF of salbutamol sulphate from powder blends. Lactose particles were prepared by an anti-solvent recrystallisation process using various ratios of the two solvents. Crystallised lactose or commercial lactose was mixed with salbutamol sulphate and in vitro deposition studies were performed using a multistage liquid impinger. Solid-state characterisation results showed that commercial lactose was primarily composed of the α-anomer whilst the crystallised lactose samples comprised a α/β mixture containing a lower number of moles of water per mole of lactose compared to the commercial lactose. The crystallised lactose particles were also less elongated and more irregular in shape with rougher surfaces. Formulation blends containing crystallised lactose showed better aerosolisation performance and dose uniformity when compared to commercial lactose. The highest FPF of salbutamol sulphate (38.0 ± 2.5%) was obtained for the lactose samples that were crystallised from a mixture of ethanol/butanol (20:60) compared to a FPF of 19.7 ± 1.9% obtained for commercial lactose. Engineered lactose carriers with modified anomer content and physicochemical properties, when compared to the commercial grade, produced formulations which generated a high FPF.",
"title": ""
},
{
"docid": "a2c4320defa1b3b708f04f37ad6a994a",
"text": "With the rapid application growing of internet and wireless network, information security becomes significant to protect commerce secret and privacy. Encryption algorithm plays an important role for information security guarantee. In this paper, we evaluate the performance of two symmetric key encryption algorithms: DES and Blowfish which commonly used for network data encryption. In this paper, we analyzed encryption security, evaluated encryption speed and power consumption for both algorithms. Experimental results show that Blowfish algorithm runs faster than DES, while the power consumption is almost the same. It is proved that the Blowfish encryption algorithm maybe more suitable for wireless network application security.",
"title": ""
}
] |
scidocsrr
|
dbedb01064e0ef20fc79dd56a8fe1530
|
A Virtualized Separation Kernel for Mixed-Criticality Systems
|
[
{
"docid": "fbfb6b7cb2dc3e774197c470c55a928b",
"text": "The integrated modular avionics (IMA) architectures have ushered in a new wave of thought regarding avionics integration. IMA architectures utilize shared, configurable computing, communication, and I/O resources. These architectures allow avionics system integrators to benefit from increased system scalability, as well as from a form of platform management that reduces the workload for aircraft-level avionics integration activities. In order to realize these architectural benefits, the avionics suppliers must engage in new philosophies for sharing a set of system-level resources that are managed a level higher than each individual avionics system. The mechanisms for configuring and managing these shared intersystem resources are integral to managing the increased level of avionics integration that is inherent to the IMA architectures. This paper provides guidance for developing the methodology and tools to efficiently manage the set of shared intersystem resources. This guidance is based upon the author's experience in developing the Genesis IMA architecture at Smiths Aerospace. The Genesis IMA architecture was implemented on the Boeing 787 Dreamliner as the common core system (CCS)",
"title": ""
},
{
"docid": "6c018b35bf2172f239b2620abab8fd2f",
"text": "Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs -- a major concern for organizations wishing to move to the cloud. In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with cur- rent commodity hardware, NoHype is a significant advance in the security of cloud computing.",
"title": ""
}
] |
[
{
"docid": "63ef3d1963a60437543f0c125307194f",
"text": "In this paper, we discuss theoretical foundations and a practical realization of a real-time traffic sign detection, tracking and recognition system operating on board of a vehicle. In the proposed framework, a generic detector refinement procedure based on mean shift clustering is introduced. This technique is shown to improve the detection accuracy and reduce the number of false positives for a broad class of object detectors for which a soft response’s confidence can be sensibly estimated. The track of an already established candidate is maintained over time using an instance-specific tracking function that encodes the relationship between a unique feature representation of the target object and the affine distortions it is subject to. We show that this function can be learned on-the-fly via regression from random transformations applied to the image of the object in known pose. Secondly, we demonstrate its capability of reconstructing the full-face view of a sign from substantial view angles. In the recognition stage, a concept of class similarity measure learned from image pairs is discussed and its realization using SimBoost, a novel version of AdaBoost algorithm, is analyzed. Suitability of the proposed method for solving multi-class traffic sign classification problems is shown experimentally for different feature representations of an image. Overall performance of our system is evaluated based on a prototype C++ implementation. Illustrative output generated by this demo application is provided as a supplementary material attached to this paper.",
"title": ""
},
{
"docid": "b168f298448b3ba16b7f585caae7baa6",
"text": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "92cc82dc2df876c27eda5cbf8bd4fcac",
"text": "This article analyzes the inquisitorial trial of Maria Duran, a Catalan novice in the Dominican convent of Nossa Senhora do Paraíso in Portugal. Maria Duran was arrested by the Inquisition in 1741 and, after a lengthy trial, condemned in 1744 to a public lashing and exile. She was suspected of having made a pact with the Devil and was accused by many female witnesses of possessing a \"secret penis\" that she had allegedly used in her amorous relations with fellow nuns and novices. Her voluminous trial dossier offers a rare and fascinating documentary insight into the often extreme reactions that female homosexuality provoked from both men and women in early modern Portugal. Using the evidence offered by the 18th-century trial of Maria Duran, this article highlights female bewilderment when faced with female-on-female sexual violence and the difficulty that men (in this case, churchmen) had coming to terms with the existence of female homosexuality. It also discusses the case in light of the acts/identity debate among historians of the history of sexuality.",
"title": ""
},
{
"docid": "62769e2979d1a1181ffebedc18f3783a",
"text": "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. Preliminaries Substrate-independence is a common assumption in the philosophy of mind. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium; silicon-based processors inside a computer could in principle do the trick as well. Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall take it as a given here. The argument we shall present does not, however, depend on any strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true analytic (either analytically or metaphysically) just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations (including passing Turing tests etc.). We only need the weaker assumption that it would suffice (for generation of subjective experiences) if the computational processes of a human brain were structurally replicated in suitably fine-grained detail, such as on the level of individual neurons. This highly attenuated version of substrate-independence is widely accepted. At the current stage of technology, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Several authors argue that this stage may be only a few decades away (Drexler 1985; Bostrom 1998; Kurzweil 1999; Moravec 1999). Yet for present purposes we need not make any assumptions about the time-scale. The argument we shall present works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints. Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. 
Since we are still lacking a "theory of everything", we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those theoretical constraints that in our current understanding limit the information processing density that can be attained in a given lump of matter. But we can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10 instructions per second (Drexler 1992). Another author gives a rough performance estimate of 10 operations per second for a computer with a mass on the order of a large planet (Bradbury 2000). The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we already understand (contrast enhancement in the retina), yields a figure of ~10 operations per second for the entire human brain (Moravec 1989). An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second (Bostrom 1998). Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its components. One would therefore expect a substantial increase in efficiency when using more reliable and versatile non-biological processors. If the environment is included in the simulation, this will require additional computing power. How much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible (unless radically new physics is discovered). But in order to get a realistic simulation of human experience, much less is needed — only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don't notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations indeed: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated. Microscopic phenomena could likely be filled in on an ad hoc basis. What you see when you look in an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we set up systems that are designed to harness unobserved microscopic phenomena operating according to known principles to get results that we are able to independently verify. The paradigmatic instance is computers. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. But this is no big problem, since our current computing power is negligible by posthuman standards.
In general, the posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Thus, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director can skip back a few seconds and rerun the simulation in a way that avoids the problem. It thus seems plausible that the main computational cost consists in simulating organic brains down to the neuronal or sub-neuronal level (although as we build more and faster computers, the cost of simulating our machines might eventually come to dominate the cost of simulating nervous systems). While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10-10 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for the argument we are pursuing here. We noted that a rough approximation of the computational power of a single planetary-mass computer is 10 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. Such a computer could simulate the entire mental history of humankind (call this an ancestor-simulation) in less than 10 seconds. (A posthuman civilization may eventually build an astronomical number of such computers.) We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our guesstimates. • Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose. The Simulation Argument The core of the argument that this paper presents can be expressed roughly as follows: If there were a substantial chance that our civilization will ever get to the posthuman stage and run many ancestor-simulations, then how come you are not living in such a simulation? We shall develop this idea into a rigorous argument. Let us introduce the following notation: DOOM: Humanity goes extinct before reaching the posthuman stage SIM: You are living in a simulation N: Average number of ancestor-simulations run by a posthuman civilization H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage The expected fraction of all observers with human-type experiences that live in simulations is then fsim = ([1-P(DOOM)]×N×H) / ([1-P(DOOM)]×N×H + H)",
"title": ""
},
{
"docid": "dab0fdbd380e35ba7b5f2d026609f797",
"text": "Lifestyle intervention can be effective when treating non-alcoholic fatty liver diseases (NAFLD) patients. Weight loss decreases cardiovascular and diabetes risk and can also regress liver disease. Weight reductions of ⩾10% can induce a near universal non-alcoholic steatohepatitis resolution and fibrosis improvement by at least one stage. However, modest weight loss (>5%) can also produce important benefits on the components of the NAFLD activity score (NAS). Additionally, we need to explore the role of total calories and type of weight loss diet, micro- and macronutrients, evidence-based benefits of physical activity and exercise and finally support these modifications through established behavioural change models and techniques for long-term maintenance of lifestyle modifications. Following a Mediterranean diet can reduce liver fat even without weight loss and is the most recommended dietary pattern for NAFLD. The Mediterranean diet is characterised by reduced carbohydrate intake, especially sugars and refined carbohydrates (40% of the calories vs. 50-60% in a typical low fat diet), and increased monounsaturated and omega-3 fatty acid intake (40% of the calories as fat vs. up-to 30% in a typical low fat diet). Both TV sitting (a reliable marker of overall sedentary behaviour) and physical activity are associated with cardio-metabolic health, NAFLD and overall mortality. A 'triple hit behavioural phenotype' of: i) sedentary behaviour, ii) low physical activity, and iii) poor diet have been defined. Clinical evidence strongly supports the role of lifestyle modification as a primary therapy for the management of NAFLD and NASH. This should be accompanied by the implementation of strategies to avoid relapse and weight regain.",
"title": ""
},
{
"docid": "3e5312f6d3c02d8df2903ea80c1bbae5",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "e4db0ee5c4e2a5c87c6d93f2f7536f15",
"text": "Despite the importance of sparsity in many big data applications, there are few existing methods for efficient distributed optimization of sparsely-regularized objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in distributed environments. By taking a nontraditional view of classical objectives as part of a more general primal-dual setting, we obtain a new class of methods that can be efficiently distributed and is applicable to common L1-regularized regression and classification objectives, such as Lasso, sparse logistic regression, and elastic net regression. We provide convergence guarantees for this framework and demonstrate strong empirical performance as compared to other stateof-the-art methods on several real-world distributed datasets.",
"title": ""
},
{
"docid": "b4ecf497c8240a48a6e60aef400d0e1e",
"text": "Skin color diversity is the most variable and noticeable phenotypic trait in humans resulting from constitutive pigmentation variability. This paper will review the characterization of skin pigmentation diversity with a focus on the most recent data on the genetic basis of skin pigmentation, and the various methodologies for skin color assessment. Then, melanocyte activity and amount, type and distribution of melanins, which are the main drivers for skin pigmentation, are described. Paracrine regulators of melanocyte microenvironment are also discussed. Skin response to sun exposure is also highly dependent on color diversity. Thus, sensitivity to solar wavelengths is examined in terms of acute effects such as sunburn/erythema or induced-pigmentation but also long-term consequences such as skin cancers, photoageing and pigmentary disorders. More pronounced sun-sensitivity in lighter or darker skin types depending on the detrimental effects and involved wavelengths is reviewed.",
"title": ""
},
{
"docid": "82917c4e6fb56587cc395078c14f3bb7",
"text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.",
"title": ""
},
{
"docid": "363a465d626fec38555563722ae92bb1",
"text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "e94183f4200b8c6fef1f18ec0e340869",
"text": "Hoon Sohn Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C926 Los Alamos National Laboratory, Los Alamos, NM 87545 e-mail: sohn@lanl.gov Charles R. Farrar Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C946 e-mail: farrar@lanl.gov Norman F. Hunter Engineering Sciences & Applications Division, Measurement Technology Group, M/S C931 e-mail: hunter@lanl.gov Keith Worden Department of Mechanical Engineering University of Sheffield Mappin St. Sheffield S1 3JD, United Kingdom e-mail: k.worden@sheffield.ac.uk",
"title": ""
},
{
"docid": "45c8f409a5783067b6dce332500d5a88",
"text": "An online learning community enables learners to access up-to-date information via the Internet anytime–anywhere because of the ubiquity of the World Wide Web (WWW). Students can also interact with one another during the learning process. Hence, researchers want to determine whether such interaction produces learning synergy in an online learning community. In this paper, we take the Technology Acceptance Model as a foundation and extend the external variables as well as the Perceived Variables as our model and propose a number of hypotheses. A total of 436 Taiwanese senior high school students participated in this research, and the online learning community focused on learning English. The research results show that all the hypotheses are supported, which indicates that the extended variables can effectively predict whether users will adopt an online learning community. Finally, we discuss the implications of our findings for the future development of online English learning communities. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6df61e330f6b71c4ef136e3a2220a5e2",
"text": "In recent years, we have seen significant advancement in technologies to bring about smarter cities worldwide. The interconnectivity of things is the key enabler in these initiatives. An important building block is smart mobility, and it revolves around resolving land transport challenges in cities with dense populations. A transformative direction that global stakeholders are looking into is autonomous vehicles and the transport infrastructure to interconnect them to the traffic management system (that is, vehicle to infrastructure connectivity), as well as to communicate with one another (that is, vehicle to vehicle connectivity) to facilitate better awareness of road conditions. A number of countries had also started to take autonomous vehicles to the roads to conduct trials and are moving towards the plan for larger scale deployment. However, an important consideration in this space is the security of the autonomous vehicles. There has been an increasing interest in the attacks and defences of autonomous vehicles as these vehicles are getting ready to go onto the roads. In this paper, we aim to organize and discuss the various methods of attacking and defending autonomous vehicles, and propose a comprehensive attack and defence taxonomy to better categorize each of them. Through this work, we hope that it provides a better understanding of how targeted defences should be put in place for targeted attacks, and for technologists to be more mindful of the pitfalls when developing architectures, algorithms and protocols, so as to realise a more secure infrastructure composed of dependable autonomous vehicles.",
"title": ""
},
{
"docid": "9d9b217478829c467b5c2c745de9af9e",
"text": "How does one recognize a \"school\" of thought ? And why should one? These are questions that, concerning a truly distinctive and now distinguished intellectual trend originating in Toronto, I have entertained since the death of Marshall McLuhan on the last day of 1980. At the time I was impressed by the fact that Harold Innis, Eric Havelock and McLuhan, the three main scholars who taught that communication systems create definite psychological and social \"states\", had all been at the university of Toronto. The most significant common thread was that all three had explored different implications of ancient Greek literacy to support their theoretical approach. Even if they had not directly collaborated with each other, they had known each other's work and been inspired by common perceptions.",
"title": ""
},
{
"docid": "0ef2c10b511454cc4432217062e8f50d",
"text": "Non-volatile memory (NVM) is a new storage technology that combines the performance and byte addressability of DRAM with the persistence of traditional storage devices like flash (SSD). While these properties make NVM highly promising, it is not yet clear how to best integrate NVM into the storage layer of modern database systems. Two system designs have been proposed. The first is to use NVM exclusively, i.e., to store all data and index structures on it. However, because NVM has a higher latency than DRAM, this design can be less efficient than main-memory database systems. For this reason, the second approach uses a page-based DRAM cache in front of NVM. This approach, however, does not utilize the byte addressability of NVM and, as a result, accessing an uncached tuple on NVM requires retrieving an entire page.\n In this work, we evaluate these two approaches and compare them with in-memory databases as well as more traditional buffer managers that use main memory as a cache in front of SSDs. This allows us to determine how much performance gain can be expected from NVM. We also propose a lightweight storage manager that simultaneously supports DRAM, NVM, and flash. Our design utilizes the byte addressability of NVM and uses it as an additional caching layer that improves performance without losing the benefits from the even faster DRAM and the large capacities of SSDs.",
"title": ""
},
{
"docid": "7bef0f8e1df99d525f3d2356bd129e45",
"text": "The term 'participation' is traditionally used in HCI to describe the involvement of users and stakeholders in design processes, with a pretext of distributing control to participants to shape their technological future. In this paper we ask whether these values can hold up in practice, particularly as participation takes on new meanings and incorporates new perspectives. We argue that much HCI research leans towards configuring participation. In exploring this claim we explore three questions that we consider important for understanding how HCI configures participation; Who initiates, directs and benefits from user participation in design? In what forms does user participation occur? How is control shared with users in design? In answering these questions we consider the conceptual, ethical and pragmatic problems this raises for current participatory HCI research. Finally, we offer directions for future work explicitly dealing with the configuration of participation.",
"title": ""
},
{
"docid": "35e377e94b9b23283eabf141bde029a2",
"text": "We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks.",
"title": ""
},
{
"docid": "27f0723e95930400d255c8cd40ea53b0",
"text": "We investigated the use of context-dependent deep neural network hidden Markov models, or CD-DNN-HMMs, to improve speech recognition performance for a better assessment of children English language learners (ELLs). The ELL data used in the present study was obtained from a large language assessment project administered in schools in a U.S. state. Our DNN-based speech recognition system, built using rectified linear units (ReLU), greatly outperformed recognition accuracy of Gaussian mixture models (GMM)-HMMs, even when the latter models were trained with eight times more data. Large improvement was observed for cases of noisy and/or unclear responses, which are common in ELL children speech. We further explored the use of content and manner-of-speaking features, derived from the speech recognizer output, for estimating spoken English proficiency levels. Experimental results show that the DNN-based recognition approach achieved 31% relative WER reduction when compared to GMM-HMMs. This further improved the quality of the extracted features and final spoken English proficiency scores, and increased overall automatic assessment performance to the human performance level, for various open-ended spoken language tasks.",
"title": ""
}
] |
scidocsrr
|
2a8be2c15aa2ccd0c22908c8e305952e
|
Whoo.ly: facilitating information seeking for hyperlocal communities using social media
|
[
{
"docid": "3a4da0cf9f4fdcc1356d25ea1ca38ca4",
"text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. In this paper, we address the lack of hand-labeled training data by taking advantage of weak super vision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.",
"title": ""
},
{
"docid": "81387b0f93b68e8bd6a56a4fd81477e9",
"text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "3d8fb085a0470b2c06336642436e9523",
"text": "The recent changes in climate have increased the importance of environmental monitoring, making it a topical and highly active research area. This field is based on remote sensing and on wireless sensor networks for gathering data about the environment. Recent advancements, such as the vision of the Internet of Things (IoT), the cloud computing model, and cyber-physical systems, provide support for the transmission and management of huge amounts of data regarding the trends observed in environmental parameters. In this context, the current work presents three different IoT-based wireless sensors for environmental and ambient monitoring: one employing User Datagram Protocol (UDP)-based Wi-Fi communication, one communicating through Wi-Fi and Hypertext Transfer Protocol (HTTP), and a third one using Bluetooth Smart. All of the presented systems provide the possibility of recording data at remote locations and of visualizing them from every device with an Internet connection, enabling the monitoring of geographically large areas. The development details of these systems are described, along with the major differences and similarities between them. The feasibility of the three developed systems for implementing monitoring applications, taking into account their energy autonomy, ease of use, solution complexity, and Internet connectivity facility, was analyzed, and revealed that they make good candidates for IoT-based solutions.",
"title": ""
},
{
"docid": "01ccb35abf3eed71191dc8638e58f257",
"text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.",
"title": ""
},
{
"docid": "13da78e7868baf04fce64ff02690b0f0",
"text": "Industrial IoT (IIoT) refers to the application of IoT in industrial management to improve the overall operational efficiency. With IIoT that accelerates the industrial automation process by enrolling thousands of IoT devices, strong security foundations are to be deployed befitting the distributed connectivity and constrained functionalities of the IoT devices. Recent years witnessed severe attacks exploiting the vulnerabilities in the devices of IIoT networks. Moreover, attackers can use the relations among the vulnerabilities to penetrate deep into the network. This paper addresses the security issues in IIoT network because of the vulnerabilities existing in its devices. As graphs are efficient in representing relations among entities, we propose a graphical model representing the vulnerability relations in the IIoT network. This helps to formulate the security issues in the network as graph-theoretic problems. The proposed model acts as a security framework for the risk assessment of the network. Furthermore, we propose a set of risk mitigation strategies to improve the overall security of the network. The strategies include detection and removal of the attack paths with high risk and low hop-length. We also discuss a method to identify the strongly connected vulnerabilities referred as hot-spots. A use-case is discussed and various security parameters are evaluated. The simulation results with graphs of different sizes and structures are presented for the performance evaluation of the proposed techniques against the changing dynamics of the IIoT networks.",
"title": ""
},
{
"docid": "8709706ffafdadfc2fb9210794dfa782",
"text": "The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy generation related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Association also predicts that electricity usage for residential appliances would grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people’s standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use electricity resources. The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. These savings can be made through two avenues: the first is through the use of energy-efficient lighting and HVAC systems; and the second is through the deployment of energy management systems which utilize real time price information to schedule loads to minimize energy bills. The latter scheme requires an intelligent power grid or smart grid which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligenceand bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.",
"title": ""
},
{
"docid": "d34cc5c09e882c167b3ff273f5c52159",
"text": "Received: 23 May 2011 Revised: 20 February 2012 2nd Revision: 7 September 2012 3rd Revision: 6 November 2012 Accepted: 7 November 2012 Abstract Competitive pressures are forcing organizations to be flexible. Being responsive to changing environmental conditions is an important factor in determining corporate performance. Earlier research, focusing primarily on IT infrastructure, has shown that organizational flexibility is closely related to IT infrastructure flexibility. Using real-world cases, this paper explores flexibility in the broader context of the IS function. An empirically derived framework for better understanding and managing IS flexibility is developed using grounded theory and content analysis. A process model for managing flexibility is presented; it includes steps for understanding contextual factors, recognizing reasons why flexibility is important, evaluating what needs to be flexible, identifying flexibility categories and stakeholders, diagnosing types of flexibility needed, understanding synergies and tradeoffs between them, and prescribing strategies for proactively managing IS flexibility. Three major flexibility categories, flexibility in IS operations, flexibility in IS systems & services development and deployment, and flexibility in IS management, containing 10 IS flexibility types are identified and described. European Journal of Information Systems (2014) 23, 151–184. doi:10.1057/ejis.2012.53; published online 8 January 2013",
"title": ""
},
{
"docid": "f0bbe4e6d61a808588153c6b5fc843aa",
"text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.",
"title": ""
},
{
"docid": "0024e332c0ce1adee2d29a0d2b4b6408",
"text": "Vehicles equipped with intelligent systems designed to prevent accidents, such as collision warning systems (CWSs) or lane-keeping assistance (LKA), are now on the market. The next step in reducing road accidents is to coordinate such vehicles in advance not only to avoid collisions but to improve traffic flow as well. To this end, vehicle-to-infrastructure (V2I) communications are essential to properly manage traffic situations. This paper describes the AUTOPIA approach toward an intelligent traffic management system based on V2I communications. A fuzzy-based control algorithm that takes into account each vehicle's safe and comfortable distance and speed adjustment for collision avoidance and better traffic flow has been developed. The proposed solution was validated by an IEEE-802.11p-based communications study. The entire system showed good performance in testing in real-world scenarios, first by computer simulation and then with real vehicles.",
"title": ""
},
{
"docid": "67fe4b931c2495c6833da493707e58d1",
"text": "Alan N. Steinberg Technical Director, Data Fusion ERIM International, Inc. 1101 Wilson Blvd Arlington, VA 22209 (703)528-5250 x4109 steinberg@erim-int.com Christopher L. Bowman Data Fusion and Neural Networks 1643 Hemlock Way Broomfield, CO 80020 (303)469-9828 cbowman@indra.com Franklin E. White Director, Program Development SPAWAR Systems Center San Diego, CA 92152 Chair, Data Fusion Group (619) 553-4036 whitefe@spawar.navy.mil",
"title": ""
},
{
"docid": "f7f5a0bedb0cae6f2d9fda528dfffcb9",
"text": "This paper focuses on the recognition of Activities of Daily Living (ADL) applying pattern recognition techniques to the data acquired by the accelerometer available in the mobile devices. The recognition of ADL is composed by several stages, including data acquisition, data processing, and artificial intelligence methods. The artificial intelligence methods used are related to pattern recognition, and this study focuses on the use of Artificial Neural Networks (ANN). The data processing includes data cleaning, and the feature extraction techniques to define the inputs for the ANN. Due to the low processing power and memory of the mobile devices, they should be mainly used to acquire the data, applying an ANN previously trained for the identification of the ADL. The main purpose of this paper is to present a new method implemented with ANN for the identification of a defined set of ADL with a reliable accuracy. This paper also presents a comparison of different types of ANN in order to choose the type for the implementation of the final method. Results of this research probes that the best accuracies are achieved with Deep Learning techniques with an accuracy higher than 80%.",
"title": ""
},
{
"docid": "66876eb3710afda075b62b915a2e6032",
"text": "In this paper we analyze the CS Principles project, a proposed Advanced Placement course, by focusing on the second pilot that took place in 2011-2012. In a previous publication the first pilot of the course was explained, but not in a context related to relevant educational research and philosophy. In this paper we analyze the content and the pedagogical approaches used in the second pilot of the project. We include information about the third pilot being conducted in 2012-2013 and the portfolio exam that is part of that pilot. Both the second and third pilots provide evidence that the CS Principles course is succeeding in changing how computer science is taught and to whom it is taught.",
"title": ""
},
{
"docid": "b08023089abd684d26fabefb038cc9fa",
"text": "IMSI catching is a problem on all generations of mobile telecommunication networks, i.e., 2G (GSM, GPRS), 3G (HDSPA, EDGE, UMTS) and 4G (LTE, LTE+). Currently, the SIM card of a mobile phone has to reveal its identity over an insecure plaintext transmission, before encryption is enabled. This identifier (the IMSI) can be intercepted by adversaries that mount a passive or active attack. Such identity exposure attacks are commonly referred to as 'IMSI catching'. Since the IMSI is uniquely identifying, unauthorized exposure can lead to various location privacy attacks. We propose a solution, which essentially replaces the IMSIs with changing pseudonyms that are only identifiable by the home network of the SIM's own network provider. Consequently, these pseudonyms are unlinkable by intermediate network providers and malicious adversaries, and therefore mitigate both passive and active attacks, which we also formally verified using ProVerif. Our solution is compatible with the current specifications of the mobile standards and therefore requires no change in the infrastructure or any of the already massively deployed network equipment. The proposed method only requires limited changes to the SIM and the authentication server, both of which are under control of the user's network provider. Therefore, any individual (virtual) provider that distributes SIM cards and controls its own authentication server can deploy a more privacy friendly mobile network that is resilient against IMSI catching attacks.",
"title": ""
},
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
},
{
"docid": "70c33dda7076e182ab2440e1f37186f7",
"text": "A loss of subchannel orthogonality due to timevariant multipath channels in orthogonal frequency-division multiplexing (OFDM) systems leads to interchannel interference (ICI) which increases the error floor in proportion to the Doppler frequency. In this paper, a simple frequency-domain equalization technique which can compensate for the effect of ICI in a multipath fading channel is proposed. In this technique, the equalization of the received OFDM signal is achieved by using the assumption that the channel impulse response (CIR) varies in a linear fashion during a block period and by compensating for the ICI terms that significantly affect the bit-error rate (BER) performance.",
"title": ""
},
{
"docid": "f1f574734a9a3ba579067e3ef7ce9649",
"text": "This paper presents an integrated control approach for autonomous driving comprising a corridor path planner that determines constraints on vehicle position, and a linear time-varying model predictive controller combining path planning and tracking in a road-aligned coordinate frame. The capabilities of the approach are illustrated in obstacle-free curved road-profile tracking, in an application coupling adaptive cruise control (ACC) with obstacle avoidance (OA), and in a typical driving maneuver on highways. The vehicle is modeled as a nonlinear dynamic bicycle model with throttle, brake pedal position, and steering angle as control inputs. Proximity measurements are assumed to be available within a given range field surrounding the vehicle. The proposed general feedback control architecture includes an estimator design for fusion of database information (maps), exteroceptive as well as proprioceptive measurements, a geometric corridor planner based on graph theory for the avoidance of multiple, potentially dynamically moving objects, and a spatial-based predictive controller. Switching rules for transitioning between four different driving modes, i.e., ACC, OA, obstacle-free road tracking (RT), and controlled braking (Brake), are discussed. The proposed method is evaluated on test cases, including curved and highway two-lane road tracks with static as well as moving obstacles.",
"title": ""
},
{
"docid": "cfa58ab168beb2d52fe6c2c47488e93a",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "22654d2ed4c921c7bceb22ce9f9dc892",
"text": "xv",
"title": ""
},
{
"docid": "8ddfa95b1300959ab5e84a0b66dac593",
"text": "Do you need the book of Network Science and Cybersecurity pdf with ISBN of 9781461475965? You will be glad to know that right now Network Science and Cybersecurity pdf is available on our book collections. This Network Science and Cybersecurity comes PDF and EPUB document format. If you want to get Network Science and Cybersecurity pdf eBook copy, you can download the book copy here. The Network Science and Cybersecurity we think have quite excellent writing style that make it easy to comprehend.",
"title": ""
},
{
"docid": "181a3d68fd5b5afc3527393fc3b276f9",
"text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.",
"title": ""
},
{
"docid": "0683dbfa548d90b1fcbd3d793d194e6c",
"text": "Ayurvedic medicine is an ancient Indian form of healing. It is gaining popularity as part of the growing interest in New Age spirituality and in complementary and alternative medicine (CAM). There is no cure for Asthma as per the Conventional Medical Science. Ayurvedic medicines can be a potential and effective alternative for the treatment against the bronchial asthma. Ayurvedic medicines are used for the treatment of diseases globally. The present study was a review on the management of Tamaka-Shwasa based on Ayurvedic drugs including the respiratory tonics and naturally occurring bronchodilator and immune-modulators. This study result concluded that a systematic combination of herbal and allopathic medicines is required for management of asthma.",
"title": ""
}
] |
scidocsrr
|
264a873b8e345efaf1a04b01c877b957
|
Video Normals from Colored Lights
|
[
{
"docid": "df2b4b46461d479ccf3d24d2958f81fd",
"text": "This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our optimization-based method builds on the observation that most objects are composed of a small number of fundamental materials by constraining each pixel to be representable by a combination of at most two such materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding accurate rerenderings under novel lighting conditions for a wide variety of objects. We demonstrate examples of interactive editing operations made possible by our approach.",
"title": ""
}
] |
[
{
"docid": "1e8466199d3ac46c0005551204d017bf",
"text": "Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, whereas not having demonstrated strong generalization ability on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, and demonstrate its superior performance on various large-scale benchmarks, and in particular show its great success on challenging reconstruction tasks. Moreover, we provide guidelines towards practical integration of learned descriptors in Structurefrom-Motion (SfM) pipelines, showing the good trade-off that GeoDesc delivers to 3D reconstruction tasks between accuracy and efficiency.",
"title": ""
},
{
"docid": "dfb9c31c73f1ca5849f6f78c80d9fd55",
"text": "Handing over objects to humans is an essential capability for assistive robots. While there are infinite ways to hand an object, robots should be able to choose the one that is best for the human. In this paper we focus on choosing the robot and object configuration at which the transfer of the object occurs, i.e. the hand-over configuration. We advocate the incorporation of user preferences in choosing hand-over configurations. We present a user study in which we collect data on human preferences and a human-robot interaction experiment in which we compare hand-over configurations learned from human examples against configurations planned using a kinematic model of the human. We find that the learned configurations are preferred in terms of several criteria, however planned configurations provide better reachability. Additionally, we find that humans prefer hand-overs with default orientations of objects and we identify several latent variables about the robot's arm that capture significant human preferences. These findings point towards planners that can generate not only optimal but also preferable hand-over configurations for novel objects.",
"title": ""
},
{
"docid": "722c18701e7c8b9a054a9603eb6bf8f4",
"text": "We report in this case-study paper our experience and success story with a practical approach and tool for unit regression testing of a SCADA (Supervisory Control and Data Acquisition) software. The tool uses a black-box specification of the units under test to automatically generate NUnit test code. We then improved the test suite by white-box and mutation testing. The approach and tool were developed in an action-research project to test a commercial large-scale SCADA system called Rocket.",
"title": ""
},
{
"docid": "1fcdfd02a6ecb12dec5799d6580c67d4",
"text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.",
"title": ""
},
{
"docid": "0aa85d4ac0f2034351d5ba690929db19",
"text": "The quantity of small scale solar photovoltaic (PV) arrays in the United States has grown rapidly in recent years. As a result, there is substantial interest in high quality information about the quantity, power capacity, and energy generated by such arrays, including at a high spatial resolution (e.g., cities, counties, or other small regions). Unfortunately, existing methods for obtaining this information, such as surveys and utility interconnection filings, are limited in their completeness and spatial resolution. This work presents a computer algorithm that automatically detects PV panels using very high resolution color satellite imagery. The approach potentially offers a fast, scalable method for obtaining accurate information on PV array location and size, and at much higher spatial resolutions than are currently available. The method is validated using a very large (135 km) collection of publicly available (Bradbury et al., 2016) aerial imagery, with over 2700 human annotated PV array locations. The results demonstrate the algorithm is highly effective on a per-pixel basis. It is likewise effective at object-level PV array detection, but with significant potential for improvement in estimating the precise shape/size of the PV arrays. These results are the first of their kind for the detection of solar PV in aerial imagery, demonstrating the feasibility of the approach and establishing a baseline performance for future investigations. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f629f426943b995a304f3d35b7090cda",
"text": "We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label] where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than languagespecific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-ofthe-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning “from scratch” in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.",
"title": ""
},
{
"docid": "3bebd1c272b1cba24f6aeeabaa5c54d2",
"text": "Cloacal anomalies occur when failure of the urogenital septum to separate the cloacal membrane results in the urethra, vagina, rectum and anus opening into a single common channel. The reported incidence is 1:50,000 live births. Short-term paediatric outcomes of surgery are well reported and survival into adulthood is now usual, but long-term outcome data are less comprehensive. Chronic renal failure is reported to occur in 50 % of patients with cloacal anomalies, and 26–72 % (dependant on the length of the common channel) of patients experience urinary incontinence in adult life. Defaecation is normal in 53 % of patients, with some managed by methods other than surgery, including medication, washouts, stoma and antegrade continent enema. Gynaecological anomalies are common and can necessitate reconstructive surgery at adolescence for menstrual obstruction. No data are currently available on sexual function and little on the quality of life. Pregnancy is extremely rare and highly risky. Patient care should be provided by a multidisciplinary team with experience in managing these and other related complex congenital malformations. However, there is an urgent need for a well-planned, collaborative multicentre prospective study on the urological, gastrointestinal and gynaecological aspects of this rare group of complex conditions.",
"title": ""
},
{
"docid": "b47b06f8548716e0ef01a0e113d48e5d",
"text": "This paper proposes a framework to automatically construct taxonomies from a corpus of text documents. This framework first extracts terms from documents using a part-of-speech parser. These terms are then filtered using domain pertinence, domain consensus, lexical cohesion, and structural relevance. The remaining terms represent concepts in the taxonomy. These concepts are arranged in a hierarchy with either the extended subsumption method that accounts for concept ancestors in determining the parent of a concept or a hierarchical clustering algorithm that uses various text-based window and document scopes for concept co-occurrences. Our evaluation in the field of management and economics indicates that a trade-off between taxonomy quality and depth must be made when choosing one of these methods. The subsumption method is preferable for shallow taxonomies, whereas the hierarchical clustering algorithm is recommended for deep taxonomies.",
"title": ""
},
{
"docid": "2bd9c5e042feb6a8a6ab0b1c6e97b06f",
"text": "Stacie and Meg—juniors at Atlas High School—soon must submit their course requests for next year. They have completed 3 years of science as mandated by the school system and must decide whether to take additional courses. Physics is an option, and although it is not required they believe that taking it may help with college admission. To date they have received similar grades (As and Bs) in science courses. The night before the class sign-up date they discuss the situation with their parents. Meg's dad feels that she should take physics since it will help her understand how the world works. Meg notes that Ms. Blakely (the physics teacher) is not very good. After further discussion, however, Meg concludes that she feels confident about learning physics because she always has been able to learn science in the past and that if she does not understand something she will ask the teacher. So Meg decides to sign up for it. Stacie, on the other hand, tells her parents that she just does not feel smart enough to learn or do well in physics and that because Ms. Blakely is not a good teacher Stacie would not receive much help from her. Stacie also tells her parents that few girls take the course. Under no pressure from her parents, Stacie decides she will not sign up for physics.",
"title": ""
},
{
"docid": "a45109840baf74c61b5b6b8f34ac81d5",
"text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.",
"title": ""
},
{
"docid": "464f7d25cb2a845293a3eb8c427f872f",
"text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.",
"title": ""
},
{
"docid": "79b3ed4c5e733c73b5e7ebfdf6069293",
"text": "This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.",
"title": ""
},
{
"docid": "4abd7884b97c1af7c24a81da7a6c0c3d",
"text": "AIM\nThe interaction between running, stretching and practice jumps during warm-up for jumping tests has not been investigated. The purpose of the present study was to compare the effects of running, static stretching of the leg extensors and practice jumps on explosive force production and jumping performance.\n\n\nMETHODS\nSixteen volunteers (13 male and 3 female) participated in five different warm-ups in a randomised order prior to the performance of two jumping tests. The warm-ups were control, 4 min run, static stretch, run + stretch, and run + stretch + practice jumps. After a 2 min rest, a concentric jump and a drop jump were performed, which yielded 6 variables expressing fast force production and jumping performance of the leg extensor muscles (concentric jump height, peak force, rate of force developed, drop jump height, contact time and height/time).\n\n\nRESULTS\nGenerally the stretching warm-up produced the lowest values and the run or run + stretch + jumps warm-ups produced the highest values of explosive force production. There were no significant differences (p<0.05) between the control and run + stretch warm-ups, whereas the run yielded significantly better scores than the run + stretch warm-up for drop jump height (3.2%), concentric jump height (3.4%) and peak concentric force (2.7%) and rate of force developed (15.4%).\n\n\nCONCLUSION\nThe results indicated that submaximum running and practice jumps had a positive effect whereas static stretching had a negative influence on explosive force and jumping performance. It was suggested that an alternative for static stretching should be considered in warm-ups prior to power activities.",
"title": ""
},
{
"docid": "bf0cfb73aad56e56773e0788d6111208",
"text": "Successful open source communities are constantly looking for new members and helping them become active developers. A common approach for developer onboarding in open source projects is to let newcomers focus on relevant yet easy-to-solve issues to familiarize themselves with the code and the community. The goal of this research is twofold. First, we aim at automatically identifying issues that newcomers can resolve by analyzing the history of resolved issues by simply using the title and description of issues. Second, we aim at automatically identifying issues, that can be resolved by newcomers who later become active developers. We mined the issue trackers of three large open source projects and extracted natural language features from the title and description of resolved issues. In a series of experiments, we optimized and compared the accuracy of four supervised classifiers to address our research goals. Random Forest, achieved up to 91% precision (F1-score 72%) towards the first goal while for the second goal, Decision Tree achieved a precision of 92% (F1-score 91%). A qualitative evaluation gave insights on what information in the issue description is helpful for newcomers. Our approach can be used to automatically identify, label, and recommend issues for newcomers in open source software projects based only on the text of the issues.",
"title": ""
},
{
"docid": "d799257d4a78401bf25e492250b64da8",
"text": "We examined anticipatory mechanisms of reward-motivated memory formation using event-related FMRI. In a monetary incentive encoding task, cues signaled high- or low-value reward for memorizing an upcoming scene. When tested 24 hr postscan, subjects were significantly more likely to remember scenes that followed cues for high-value rather than low-value reward. A monetary incentive delay task independently localized regions responsive to reward anticipation. In the encoding task, high-reward cues preceding remembered but not forgotten scenes activated the ventral tegmental area, nucleus accumbens, and hippocampus. Across subjects, greater activation in these regions predicted superior memory performance. Within subject, increased correlation between the hippocampus and ventral tegmental area was associated with enhanced long-term memory for the subsequent scene. These findings demonstrate that brain activation preceding stimulus encoding can predict declarative memory formation. The findings are consistent with the hypothesis that reward motivation promotes memory formation via dopamine release in the hippocampus prior to learning.",
"title": ""
},
{
"docid": "29ce7251e5237b0666cef2aee7167126",
"text": "Chinese characters have a huge set of character categories, more than 20, 000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes, namely zero-shot learning.",
"title": ""
},
{
"docid": "c760e6db820733dc3f57306eef81e5c9",
"text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.",
"title": ""
},
{
"docid": "019c2d5927e54ae8ce3fc7c5b8cff091",
"text": "In this paper, we present Affivir, a video browsing system that recommends Internet videos that match a user’s affective preference. Affivir models a user’s watching behavior as sessions, and dynamically adjusts session parameters to cater to the user’s current mood. In each session, Affivir discovers a user’s affective preference through user interactions, such as watching or skipping videos. Affivir uses video affective features (motion, shot change rate, sound energy, and audio pitch average) to retrieve videos that have similar affective responses. To efficiently search videos of interest from our video repository, all videos in the repository are pre-processed and clustered. Our experimental results shows that Affivir has made a significant improvement in user satisfaction and enjoyment, compared with several other popular baseline approaches.",
"title": ""
},
{
"docid": "1d2f35fb17183a215e864693712fa75b",
"text": "Improving the coding efficiency is the eternal theme in video coding field. The traditional way for this purpose is to reduce the redundancies inside videos by adding numerous coding options at the encoder side. However, no matter what we have done, it is still hard to guarantee the optimal coding efficiency. On the other hand, the decoded video can be treated as a certain compressive sampling of the original video. According to the compressive sensing theory, it might be possible to further enhance the quality of the decoded video by some restoration methods. Different from the traditional methods, without changing the encoding algorithm, this paper focuses on an approach to improve the video's quality at the decoder end, which equals to further boosting the coding efficiency. Furthermore, we propose a very deep convolutional neural network to automatically remove the artifacts and enhance the details of HEVC-compressed videos, by utilizing that underused information left in the bit-streams and external images. Benefit from the prowess and efficiency of the fully end-to-end feed forward architecture, our approach can be treated as a better decoder to efficiently obtain the decoded frames with higher quality. Extensive experiments indicate our approach can further improve the coding efficiency post the deblocking and SAO in current HEVC decoder, averagely 5.0%, 6.4%, 5.3%, 5.5% BD-rate reduction for all intra, lowdelay P, lowdelay B and random access configurations respectively. This method can aslo be extended to any video coding standards.",
"title": ""
},
{
"docid": "7adf46bb0a4ba677e58aee9968d06293",
"text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.",
"title": ""
}
] |
scidocsrr
|
4394e6d94766a9cfbe981a5d83dcb915
|
Freebase QA: Information Extraction or Semantic Parsing?
|
[
{
"docid": "3339dc9ecf49fc181077037424f62ca7",
"text": "Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data. We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning. Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm.",
"title": ""
}
] |
[
{
"docid": "57f1671f7b73f0b888f55a1f31a9f1a1",
"text": "The ongoing high relevance of business intelligence (BI) for the management and competitiveness of organizations requires a continuous, transparent, and detailed assessment of existing BI solutions in the enterprise. This paper presents a BI maturity model (called biMM) that has been developed and refined over years. It is used for both, in surveys to determine the overall BI maturity in German speaking countries and for the individual assessment in organizations. A recently conducted survey shows that the current average BI maturity can be assigned to the third stage (out of five stages). Comparing future (planned) activities and current challenges allows the derivation of a BI research agenda. The need for action includes among others emphasizing BI specific organizational structures, such as the establishment of BI competence centers, a stronger focus on profitability, and improved effectiveness of the BI architecture.",
"title": ""
},
{
"docid": "29eb7a0778af542ce568d663cf45bfe8",
"text": "The goal of this study is to determine how gamers’ reactions to male voices differ from reactions to female voices. The authors conducted an observational study with an experimental design to play in and record multiplayer matches (N = 245) of a video game. The researchers played against 1,660 unique gamers and broadcasted pre-recorded audio clips of either a man or a woman speaking. Gamers’ reactions were digitally recorded, capturing what was said and heard during the game. Independent coders were used to conduct a quantitative content analysis of game data. Findings indicate that, on average, the female voice received three times as many negative comments as the male voice or no voice. In addition, the female voice received more queries and more messages from other gamers than the male voice or no voice.",
"title": ""
},
{
"docid": "d5dce73957da864062d45799471b06a4",
"text": "T elusive dream of replacing missing teeth with artificial analogs has been part of dentistry for a thousand years. The coincidental discovery by Dr P-I Brånemark and his coworkers of the tenacious affinity between living bone and titanium oxides, termed osseointegration, propelled dentistry into a new age of reconstructive dentistry. Initially, the essential tenets for obtaining osseointegration dictated the atraumatic placement of a titanium screw into viable bone and a prolonged undisturbed, submerged healing period. By definition, this required a 2-stage surgical procedure. To comply, a coupling mechanism for implant placement and the eventual attachment of a transmucosal extension for restoration was explored. The initial coronal design selected was a 0.7-mm-tall external hexagon. At its inception, the design made perfect sense, because it permitted engagement of a torque transfer coupling device (fixture mount) during the surgical placement of the implant into threaded bone and the subsequent second-stage connection of the transmucosal extension that, when used in series, could effectively restore an edentulous arch. As 20 years of osseointegration in clinical practice in North America have transpired, much has changed. The efficacy and predictability of osseointegrated implants are no longer issues.1–7 During the initial years, research focused on refinements in surgical techniques and grafting procedures. Eventually, the emphasis shifted to a variety of mechanical and esthetic challenges that remained problematic and unresolved.8–10 During this period, the envelope of implant utilization dramatically expanded from the original complete edentulous application to fixed partial dentures, single-tooth replacement, maxillofacial and a myriad of other applications, limited only by the ingenuity and skill of the clinician.11–13 The external hexagonal design, ad modum Brånemark, originally intended as a coupling and rotational torque transfer mechanism, consequently evolved by necessity into a prosthetic indexing and antirotational mechanism.14,15 The expanded utilization of the hexagonal resulted in a number of significant clinical complications.8–11,16–22 To mitigate these problems, the external hexagonal, its transmucosal connections, and their retaining screws have undergone a number of modifications.23 In 1992, English published an overview of the thenavailable external hexagonal implants, numbering 25 different implants, all having the standard Brånemark hex configuration.14 The external hex has since been modified and is now available in heights of 0.7, 0.9, 1.0, and 1.2 mm and with flat-to-flat widths of 2.0, 2.4, 2.7, 3.0, 3.3, and 3.4 mm, depending on the implant platform. The available number of hexagonal implants has more than doubled. The abutment-retaining screw has also been modified with respect to material, shank length, number of threads, diameter, length, thread design, and torque application (unpublished data, 1998).23 Entirely new secondand third-generation interface coupling geometries have also been introduced into the implant milieu to overcome intrinsic hexagonal deficiencies.24–28 Concurrent with the evolution of the coupling geometry was the introduction of a variety of new implant body shapes, diameters, thread patterns, and surface topography.26,27,29–36 Today, the clinician is overwhelmed with more than 90 root-form implants to select from in a variety of diameters, lengths, surfaces, platforms, interfaces, and body designs. 
Virtually every implant company manufactures a hex top, a proprietary interface, or both; “narrow,” “standard,” and “wide” diameter implant bodies; machined, textured, and hydroxyapatite (HA) and titanium plasma-spray (TPS) surface implants; and a variety of lengths and body shapes (Table 1). In the wide-diameter arena alone, there are 25 different offerings, 15 external hexagonal, and 10 other interfaces available in a number of configurations.",
"title": ""
},
{
"docid": "1ec8f8e1b34ebcf8a0c99975d2fa58c4",
"text": "BACKGROUND\nTo compare simultaneous recordings from an external patch system specifically designed to ensure better P-wave recordings and standard Holter monitor to determine diagnostic efficacy. Holter monitors are a mainstay of clinical practice, but are cumbersome to access and wear and P-wave signal quality is frequently inadequate.\n\n\nMETHODS\nThis study compared the diagnostic efficacy of the P-wave centric electrocardiogram (ECG) patch (Carnation Ambulatory Monitor) to standard 3-channel (leads V1, II, and V5) Holter monitor (Northeast Monitoring, Maynard, MA). Patients were referred to a hospital Holter clinic for standard clinical indications. Each patient wore both devices simultaneously and served as their own control. Holter and Patch reports were read in a blinded fashion by experienced electrophysiologists unaware of the findings in the other corresponding ECG recording. All patients, technicians, and physicians completed a questionnaire on comfort and ease of use, and potential complications.\n\n\nRESULTS\nIn all 50 patients, the P-wave centric patch recording system identified rhythms in 23 patients (46%) that altered management, compared to 6 Holter patients (12%), P<.001. The patch ECG intervals PR, QRS and QT correlated well with the Holter ECG intervals having correlation coefficients of 0.93, 0.86, and 0.94, respectively. Finally, 48 patients (96%) preferred wearing the patch monitor.\n\n\nCONCLUSIONS\nA single-channel ambulatory patch ECG monitor, designed specifically to ensure that the P-wave component of the ECG be visible, resulted in a significantly improved rhythm diagnosis and avoided inaccurate diagnoses made by the standard 3-channel Holter monitor.",
"title": ""
},
{
"docid": "9ce2aaa0ad3bfe383099782c46746819",
"text": "To achieve high production of rosmarinic acid and derivatives in Escherichia coli which are important phenolic acids found in plants, and display diverse biological activities. The synthesis of rosmarinic acid was achieved by feeding caffeic acid and constructing an artificial pathway for 3,4-dihydroxyphenyllactic acid. Genes encoding the following enzymes: rosmarinic acid synthase from Coleus blumei, 4-coumarate: CoA ligase from Arabidopsis thaliana, 4-hydroxyphenyllactate 3-hydroxylase from E. coli and d-lactate dehydrogenase from Lactobacillus pentosus, were overexpressed in an l-tyrosine over-producing E. coli strain. The yield of rosmarinic acid reached ~130 mg l−1 in the recombinant strain. In addition, a new intermediate, caffeoyl-phenyllactate (~55 mg l−1), was also produced by the engineered E. coli strain. This work not only leads to high yield production of rosmarinic acid and analogues, but also sheds new light on the construction of the pathway of rosmarinic acid in E. coli.",
"title": ""
},
{
"docid": "69664b0f958b1ff56d4ac64869a11b02",
"text": "The primary objective of this study was to examine the kinematics and kinetics of the shoulder during wheelchair propulsion at a slow and moderate speed. Twenty-seven individuals with paraplegia propelled their wheelchairs at speeds of 0.9 m/s and 1.8 m/s while a motion analysis system captured movements of their upper limbs and SMART(Wheel)s simultaneously recorded their pushrim kinetics. Intraclass R correlation and Cronbach's coefficient alpha statistics revealed that all shoulder parameters were stable and consistent between strokes and speeds. The shoulder exhibited a greater range of motion, and forces and moments at the shoulder were 1.2 to 2.0 times greater (p < 0.05) during the 1.8 m/s speed trial. Peak posterior forces occurred near the end of the propulsion phase, and at the same time, the shoulder was maximally flexed and minimally abducted (p > 0.1). Shoulder positioning and the associated peak shoulder loads during propulsion may be important indicators for identifying manual wheelchair users at risk for developing shoulder pain and injury.",
"title": ""
},
{
"docid": "e49dcbcb0bb8963d4f724513d66dd3a0",
"text": "To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents’ policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.",
"title": ""
},
{
"docid": "9124e6f3679d4a86b568a2382cad6970",
"text": "Text. Linear Algebra and its Applications, David Lay, 5th edition. ISBN-13: 978-0321982384 The book can be purchased from the University Bookstore, or bought online, but you are responsible for making sure you purchase the correct book. If you buy an older edition, it is your responsibility to make sure you’re reading the correct sections and doing the correct homework problems. I strongly recommend you try for a new or used version of this edition.",
"title": ""
},
{
"docid": "f2d27b79f1ac3809f7ea605203136760",
"text": "The Internet of Things (IoT) is a fast-growing movement turning devices into always-connected smart devices through the use of communication technologies. This facilitates the creation of smart strategies allowing monitoring and optimization as well as many other new use cases for various sectors. Low Power Wide Area Networks (LPWANs) have enormous potential as they are suited for various IoT applications and each LPWAN technology has certain features, capabilities and limitations. One of these technologies, namely LoRa/LoRaWAN has several promising features and private and public LoRaWANs are increasing worldwide. Similarly, researchers are also starting to study the potential of LoRa and LoRaWANs. This paper examines the work that has already been done and identifies flaws and strengths by performing a comparison of created testbeds. Limitations of LoRaWANs are also identified.",
"title": ""
},
{
"docid": "719c0101da1ddd2029974f5a795a48f7",
"text": "This article describes color naming by 51 American English-speaking informants. A free-naming task produced 122 monolexemic color terms, with which informants named the 330 Munsell samples from the World Color Survey. Cluster analysis consolidated those terms into a glossary of 20 named color categories: the 11 Basic Color Term (BCT) categories of Berlin and Kay (1969, p. 2) plus nine nonbasic chromatic categories. The glossed data revealed two color-naming motifs: the green-blue motif of the World Color Survey and a novel green-teal-blue motif, which featured peach, teal, lavender, and maroon as high-consensus terms. Women used more terms than men, and more women expressed the novel motif. Under a constrained-naming protocol, informants supplied BCTs for the color samples previously given nonbasic terms. Most of the glossed nonbasic terms from the free-naming task named low-consensus colors located at the BCT boundaries revealed by the constrained-naming task. This study provides evidence for continuing evolution of the color lexicon of American English, and provides insight into the processes governing this evolution.",
"title": ""
},
{
"docid": "4d95cf6e1d801721fa7f588b25388528",
"text": "Compression bandaging is the most common therapy used to treat venous ulceration. The bandages must be applied so that they generate a specific pressure profile in order for the treatment to be effective. No method currently exists to monitor the pressure applied by the bandage over a number of days outside of a laboratory setting. A portable device was developed that is capable of monitoring sub-bandage pressure as the user goes about their daily routine. The device consists of four Tekscan FlexiForce A401-series force sensors connected to an excitation circuit and PIC microcontroller circuit. It is capable of measuring pressures in the range of 0 - 96 mmHg. These sensors were chosen because they are cheap, thin, flexible and durable. Both circuits are housed in a protective case that attaches to the users leg. Preliminary results correspond with the pressure values stated in the literature and the device is capable of generating accurate sub-bandage pressure data.",
"title": ""
},
{
"docid": "b64a91ca7cdeb3dfbe5678eee8962aa7",
"text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.",
"title": ""
},
{
"docid": "7a86db2874d602e768d0641bb18ae0c3",
"text": "Most work in reinforcement learning (RL) is based on discounted techniques, such as Q learning, where long-term rewards are geometrically attenuated based on the delay in their occurence. Schwartz recently proposed an undiscounted RL technique called R learning that optimizes average reward, and argued that it was a better metric than the discounted one optimized by Q learning. In this paper we compare R learning with Q learning on a simulated robot box-pushing task. We compare these two techniques across three diierent exploration strategies: two of them undirected, Boltz-mann and semi-uniform, and one recency-based directed strategy. Our results show that Q learning performs better than R learning , even when both are evaluated using the same undiscounted performance measure. Furthermore, R learning appears to be very sensitive to choice of exploration strategy. In particular, a surprising result is that R learn-ing's performance noticeably deteriorates under Boltzmann exploration. We identify precisely a limit cycle situation that causes R learning's performance to deteriorate when combined with Boltzmann exploration, and show where such limit cycles arise in our robot task. However, R learning performs much better (although not as well as Q learning) when combined with semi-uniform and recency-based exploration. In this paper, we also argue for using medians over means as a better distribution-free estimator of average performance, and describe a simple non-parametric signiicance test for comparing learning data from two RL techniques.",
"title": ""
},
{
"docid": "32ad371dcf7d234aba4b052806186f05",
"text": "Models are invaluable tools for strategic planning. Models help key decision makers develop a shared conceptual understanding of complex decisions, identify sensitivity factors and test management scenarios. Different modelling approaches are specialist areas in themselves. Model development can be onerous, expensive, time consuming, and often bewildering. It is also an iterative process where the true magnitude of the effort, time and data required is often not fully understood until well into the process. This paper explores the traditional approaches to strategic planning modelling commonly used in organisations and considers the application of a real-options approach to match and benefit from the increasing uncertainty in today’s rapidly changing world.",
"title": ""
},
{
"docid": "184da4d4589a3a9dc1f339042e6bc674",
"text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.",
"title": ""
},
{
"docid": "4dd59c743d7f4ae1f6a05f20a4bd6935",
"text": "Self-attentive feed-forward sequence models have been shown to achieve impressive results on sequence modeling tasks including machine translation [31], image generation [30] and constituency parsing [18], thereby presenting a compelling alternative to recurrent neural networks (RNNs) which has remained the de-facto standard architecture for many sequence modeling problems to date. Despite these successes, however, feed-forward sequence models like the Transformer [31] fail to generalize in many tasks that recurrent models handle with ease (e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time [28]). Moreover, and in contrast to RNNs, the Transformer model is not computationally universal, limiting its theoretical expressivity. In this paper we propose the Universal Transformer which addresses these practical and theoretical shortcomings and we show that it leads to improved performance on several tasks. Instead of recurring over the individual symbols of sequences like RNNs, the Universal Transformer repeatedly revises its representations of all symbols in the sequence with each recurrent step. In order to combine information from different parts of a sequence, it employs a self-attention mechanism in every recurrent step. Assuming sufficient memory, its recurrence makes the Universal Transformer computationally universal. We further employ an adaptive computation time (ACT) mechanism to allow the model to dynamically adjust the number of times the representation of each position in a sequence is revised. Beyond saving computation, we show that ACT can improve the accuracy of the model. Our experiments show that on various algorithmic tasks and a diverse set of large-scale language understanding tasks the Universal Transformer generalizes significantly better and outperforms both a vanilla Transformer and an LSTM in machine translation, and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task.",
"title": ""
},
{
"docid": "24c1e7f3958cb284d7c1197efdf26785",
"text": "Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs. Recent advances in GPU hardware have led to the emergence of bi-directional LSTMs as a standard method for obtaining pertoken vector representations serving as input to labeling tasks such as NER (often followed by prediction in a linear-chain CRF). Though expressive and accurate, these models fail to fully exploit GPU parallelism, limiting their computational efficiency. This paper proposes a faster alternative to Bi-LSTMs for NER: Iterated Dilated Convolutional Neural Networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, ID-CNNs permit fixed-depth convolutions to run in parallel across entire documents. We describe a distinct combination of network structure, parameter sharing and training procedures that enable dramatic 14-20x testtime speedups while retaining accuracy comparable to the Bi-LSTM-CRF. Moreover, ID-CNNs trained to aggregate context from the entire document are even more accurate while maintaining 8x faster test time speeds.",
"title": ""
},
{
"docid": "7aaa1a835cfb68489df94d9fa6026bfe",
"text": "Pseudorandom generators (PRGs) are used in modern cryptography to transform a small initial value into a long sequence of seemingly random bits. Many designs for PRGs are based on linear feedback shift registers (LFSRs), which can be constructed in such a way as to have optimal statistical and periodical properties. This thesis discusses construction principles and cryptanalytic attacks against LFSR-based PRGs. After providing a full survey of existing cryptanalytical results, we introduce and analyse the dynamic linear consistency test (DLCT), a search-tree based method for reconstructing the inner state of a PRG. We conclude by discussing the role of the inner state size in PRG design, giving lower bounds as well as examples from practice that indicate the necessary size of a secure PRG.",
"title": ""
},
{
"docid": "4dc9360837b5793a7c322f5b549fdeb1",
"text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering",
"title": ""
},
{
"docid": "887b43b17bc273e7478feaecb0ff9cba",
"text": "For many years, there has been considerable debate about whether the IT revolution was paying off in higher productivity. Studies in the 1980s found no connection between IT investment and productivity in the U.S. economy, a situation referred to as the productivity paradox. Since then, a decade of studies at the firm and country level has consistently shown that the impact of IT investment on labor productivity and economic growth is significant and positive. This article critically reviews the published research, more than 50 articles, on computers and productivity. It develops a general framework for classifying the research, which facilitates identifying what we know, how well we know it, and what we do not know. The framework enables us to systematically organize, synthesize, and evaluate the empirical evidence and to identify both limitations in existing research and data and substantive areas for future research.The review concludes that the productivity paradox as first formulated has been effectively refuted. At both the firm and the country level, greater investment in IT is associated with greater productivity growth. At the firm level, the review further concludes that the wide range of performance of IT investments among different organizations can be explained by complementary investments in organizational capital such as decentralized decision-making systems, job training, and business process redesign. IT is not simply a tool for automating existing processes, but is more importantly an enabler of organizational changes that can lead to additional productivity gains.In mid-2000, IT capital investment began to fall sharply due to slowing economic growth, the collapse of many Internet-related firms, and reductions in IT spending by other firms facing fewer competitive pressures from Internet firms. This reduction in IT investment has had devastating effects on the IT-producing sector, and may lead to slower economic and productivity growth in the U.S. economy. While the turmoil in the technology sector has been unsettling to investors and executives alike, this review shows that it should not overshadow the fundamental changes that have occurred as a result of firms' investments in IT. Notwithstanding the demise of many Internet-related companies, the returns to IT investment are real, and innovative companies continue to lead the way.",
"title": ""
}
] |
scidocsrr
|
f1b824d508c4dd33ccc009230832e5b3
|
"All I know about politics is what I read in Twitter": Weakly Supervised Models for Extracting Politicians' Stances From Twitter
|
[
{
"docid": "8a7bd0858a51380ed002b43b08a1c9f1",
"text": "Unbiased language is a requirement for reference sources like encyclopedias and scientific texts. Bias is, nonetheless, ubiquitous, making it crucial to understand its nature and linguistic realization and hence detect bias automatically. To this end we analyze real instances of human edits designed to remove bias from Wikipedia articles. The analysis uncovers two classes of bias: framing bias, such as praising or perspective-specific words, which we link to the literature on subjectivity; and epistemological bias, related to whether propositions that are presupposed or entailed in the text are uncontroversially accepted as true. We identify common linguistic cues for these classes, including factive verbs, implicatives, hedges, and subjective intensifiers. These insights help us develop features for a model to solve a new prediction task of practical importance: given a biased sentence, identify the bias-inducing word. Our linguistically-informed model performs almost as well as humans tested on the same task.",
"title": ""
},
{
"docid": "bfeff1e1ef24d0cb92d1844188f87cc8",
"text": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.1",
"title": ""
}
] |
[
{
"docid": "3c695b12b47f358012f10dc058bf6f6a",
"text": "This paper addresses the problem of classifying places in the environment of a mobile robot into semantic categories. We believe that semantic information about the type of place improves the capabilities of a mobile robot in various domains including localization, path-planning, or human-robot interaction. Our approach uses AdaBoost, a supervised learning algorithm, to train a set of classifiers for place recognition based on laser range data. In this paper we describe how this approach can be applied to distinguish between rooms, corridors, doorways, and hallways. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various environments.",
"title": ""
},
{
"docid": "ab08118b53dd5eee3579260e8b23a9c5",
"text": "We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials. On randomly generated potentials, for which there is no analytic form for either the potential or the ground-state energy, the neural network model was able to predict the ground-state energy to within chemical accuracy, with a median absolute error of 1.49 mHa. We also investigate the performance of the model in predicting other quantities such as the kinetic energy and the first excited-state energy of random potentials. While we demonstrated this approach on a simple, tractable problem, the transferability and excellent performance of the resulting model suggests further applications of deep neural networks to problems of electronic structure.",
"title": ""
},
{
"docid": "01ebd4b68fb94fc5defaff25c2d294b0",
"text": "High data rate E-band (71 GHz- 76 GHz, 81 GHz - 86 GHz, 92 GHz - 95 GHz) communication systems will benefit from power amplifiers that are more than twice as powerful than commercially available GaAs pHEMT MMICs. We report development of three stage GaN MMIC power amplifiers for E-band radio applications that produce 500 mW of saturated output power in CW mode and have > 12 dB of associated power gain. The output power density from 300 mum output gate width GaN MMICs is seven times higher than the power density of commercially available GaAs pHEMT MMICs in this frequency range.",
"title": ""
},
{
"docid": "9af4c955b7c08ca5ffbfabc9681f9525",
"text": "The emergence of deep neural networks (DNNs) as a state-of-the-art machine learning technique has enabled a variety of artificial intelligence applications for image recognition, speech recognition and translation, drug discovery, and machine vision. These applications are backed by large DNN models running in serving mode on a cloud computing infrastructure to process client inputs such as images, speech segments, and text segments. Given the compute-intensive nature of large DNN models, a key challenge for DNN serving systems is to minimize the request response latencies. This paper characterizes the behavior of different parallelism techniques for supporting scalable and responsive serving systems for large DNNs. We identify and model two important properties of DNN workloads: 1) homogeneous request service demand and 2) interference among requests running concurrently due to cache/memory contention. These properties motivate the design of serving deep learning systems fast (SERF), a dynamic scheduling framework that is powered by an interference-aware queueing-based analytical model. To minimize response latency for DNN serving, SERF quickly identifies and switches to the optimal parallel configuration of the serving system by using both empirical and analytical methods. Our evaluation of SERF using several well-known benchmarks demonstrates its good latency prediction accuracy, its ability to correctly identify optimal parallel configurations for each benchmark, its ability to adapt to changing load conditions, and its efficiency advantage (by at least three orders of magnitude faster) over exhaustive profiling. We also demonstrate that SERF supports other scheduling objectives and can be extended to any general machine learning serving system with the similar parallelism properties as above.",
"title": ""
},
{
"docid": "33f610dbc42bd50af0a8da5a6b464c8b",
"text": "Speech research has made tremendous progress in the past using the following paradigm: de ne the research problem, collect a corpus to objectively measure progress, and solve the research problem. Natural language research, on the other hand, has typically progressed without the bene t of any corpus of data with which to test research hypotheses. We describe the Air Travel Information System (ATIS) pilot corpus, a corpus designed to measure progress in Spoken Language Systems that include both a speech and natural language component. This pilot marks the rst full-scale attempt to collect such a corpus and provides guidelines for future e orts.",
"title": ""
},
{
"docid": "a162277bc8e10484211ff4a4dee116e6",
"text": "BACKGROUND\nHunter syndrome (mucopolysaccharidosis type II (MPS II)) is a rare metabolic disease that can severely compromise health, well-being and life expectancy. Little evidence has been published on the impact of MPS II on health-related quality of life (HRQL). The objective of this study was to describe this impact using the Hunter Syndrome-Functional Outcomes for Clinical Understanding Scale (HS-FOCUS) questionnaire and a range of standard validated questionnaires previously used in paediatric populations.\n\n\nMETHODS\nClinical and demographic characteristics collected in a clinical trial and responses to four HRQL questionnaires completed both by patients and parents prior to enzyme replacement treatment were used. The association between questionnaire scores and clinical function parameters were tested using Spearman rank-order correlations. Results were compared to scores in other paediatric populations with chronic conditions obtained through a targeted literature search of published studies.\n\n\nRESULTS\nOverall, 96 male patients with MPS II and their parents were enrolled in the trial. All parents completed the questionnaires and 53 patients above 12 years old also completed the self-reported versions. Parents' and patients' responses were analysed separately and results were very similar. Dysfunction according to the HS-FOCUS and the CHAQ was most pronounced in the physical function domains. Very low scores were reported in the Self Esteem and Family Cohesion domains in the CHQ and HUI3 disutility values indicated a moderate impact. Scores reported by patients and their parents were consistently lower than scores in the other paediatric populations identified (except the parent-reported Behaviour score); and considerably lower than normative values.\n\n\nCONCLUSIONS\nThis study describes the impact on HRQL in patients with MPS II and provides a broader context by comparing it with that of other chronic paediatric diseases. Physical function and the ability to perform day-to-day activities were the most affected areas and a considerable impact on the psychological aspects of patients' HRQL was also found, with a higher level of impairment across most dimensions (particularly Pain and Self Esteem) than that of other paediatric populations. Such humanistic data provide increasingly important support for establishing priorities for health care spending, and as a component of health economic analysis.",
"title": ""
},
{
"docid": "bcb74bb78276a530a73cc2cf918bf2d5",
"text": "Manufacturing managers face increasing pressure to reduce inventories across the supply chain. However, in complex supply chains, it is not always obviouswhere to hold safety stock to minimize inventory costs and provide a high level of service to the final customer. In this paper we develop a framework for modeling strategic safety stock in a supply chain that is subject to demand or forecast uncertainty. Key assumptions are that we can model the supply chain as a network, that each stage in the supply chain operates with a periodic-review base-stock policy, that demand is bounded, and that there is a guaranteed service time between every stage and its customers. We develop an optimization algorithm for the placement of strategic safety stock for supply chains that can be modeled as spanning trees. Our assumptions allow us to capture the stochastic nature of the problem and formulate it as a deterministic optimization. As a partial validation of the model, we describe its successful application by product flow teams at Eastman Kodak. We discuss how these flow teams have used the model to reduce finished goods inventory, target cycle time reduction efforts, and determine component inventories. We conclude with a list of needs to enhance the utility of the model. (Base-Stock Policy; Dynamic Programming Application;Multi-echelon Inventory System;Multi-Stage Supply-Chain Application; Safety Stock Optimization)",
"title": ""
},
{
"docid": "7c86385f69d3f011d5523a2f0e451ce5",
"text": "INTRODUCTION\nA dynamometer employing a stabilization procedure (lumbar extension machine, MedX, Ocala, FL) is effective in improving strength and reducing symptoms of low back pain (LBP), and researchers have hypothesized that this effectiveness is due to the pelvic stabilization. However, effects of the dynamometer with and without pelvic stabilization on LBP have not been compared: This was the aim of the present study.\n\n\nMETHODS\nForty-two chronic LBP patients were randomly assigned to a lumbar extension training with pelvic stabilization group (STAB; n=15), a lumbar extension without pelvic stabilization group (NO-STAB; n=15) and a control group (n=12). STAB and NO-STAB participants completed one weekly session of dynamic variable resistance exercise (one set of 8-12 repetitions to fatigue) on the lumbar extension machine (with or without pelvic stabilization) for 12 weeks. Pre- and post-test measures of self-reported LBP (101-point visual analog scale; pre-test mean of 25), related disability (Oswestry disability index; pre-test mean of 34) and lumbar strength were taken.\n\n\nRESULTS\nAfter the exercise program, the STAB group increased significantly in lumbar strength at all joint angles, and decreased significantly in visual analogue and Oswestry scores. However, there were no significant changes in these variables in the NO-STAB and control groups.\n\n\nDISCUSSION\nIsolated lumbar extension exercise is very effective in reducing LBP in chronic patients. However, when the pelvis is not stabilized, otherwise identical exercises appear ineffective in reducing LBP.",
"title": ""
},
{
"docid": "015976c8877fa6561c6dbe4dcf58ee7c",
"text": "Classic sparse representation for classification (SRC) method fails to incorporate the label information of training images, and meanwhile has a poor scalability due to the expensive computation for `1 norm. In this paper, we propose a novel subspace sparse coding method with utilizing label information to effectively classify the images in the subspace. Our new approach unifies the tasks of dimension reduction and supervised sparse vector learning, by simultaneously preserving the data sparse structure and meanwhile seeking the optimal projection direction in the training stage, therefore accelerates the classification process in the test stage. Our method achieves both flat and structured sparsity for the vector representations, therefore making our framework more discriminative during the subspace learning and subsequent classification. The empirical results on 4 benchmark data sets demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "46b9eb7c41663965e1109172d3ec697d",
"text": "In this paper, the interest is on cases where assessing the goodness of a solution for the problem is costly or hazardous to construct or extremely computationally intensive to compute. We label such category of problems as “expensive” in the present study. In the context of multi-objective evolutionary optimizations, the challenge amplifies, since multiple criteria assessments, each defined by an “expensive” objective is necessary and it is desirable to obtain the Pareto-optimal solution set under a limited resource budget. To address this issue, we propose a Pareto Rank Learning scheme that predicts the Pareto front rank of the offspring in MOEAs, in place of the “expensive” objectives when assessing the population of solutions. Experimental study on 19 standard multi-objective benchmark test problems concludes that Pareto rank learning enhanced MOEA led to significant speedup over the state-of-the-art NSGA-II, MOEA/D and SPEA2.",
"title": ""
},
{
"docid": "18acfc082ec5aebbd83c3dc047c76c92",
"text": "Although near-threshold voltage (NTV) operation is an attractive means of achieving high energy efficiency, it can degrade the circuit stability of static random access memory (SRAM) cells. This paper proposes an NTV 7T SRAM cell in a 14 nm FinFET technology to eliminate read disturbance by disconnecting the path from the bit-line to the cross-coupled inverter pair using the transmission gate. In the proposed 7T SRAM cell, the half-select issue is resolved, meaning that no write-back operation is required. A folded-column structure is applied to the proposed 7T SRAM cell to reduce the read access time and energy consumption. To reduce the standby power, the proposed 7T SRAM cell uses only a single bit-line for both read and write operations. To achieve proper “1” writing operation with a single bit-line, a two-phase approach is proposed. Compared to the conventional 8T SRAM cell, the proposed 7T SRAM cell improves the read access time, energy, and standby power by 13%, 42%, and 23%, respectively, with a 3% smaller cell area.",
"title": ""
},
{
"docid": "b6f04270b265cd5a0bb7d0f9542168fb",
"text": "This paper presents design and manufacturing procedure of a tele-operative rescue robot. First, the general task to be performed by such a robot is defined, and variant kinematic mechanisms to form the basic structure of the robot are discussed. Choosing an appropriate mechanism, geometric dimensions, and mass properties are detailed to develop a dynamics model for the system. Next, the strength of each component is analyzed to finalize its shape. To complete the design procedure, Patran/Nastran was used to apply the finite element method for strength analysis of complicated parts. Also, ADAMS was used to model the mechanisms, where 3D sketch of each component of the robot was generated by means of Solidworks, and several sets of equations governing the dimensions of system were solved using Matlab. Finally, the components are fabricated and assembled together with controlling hardware. Two main processors are used within the control system of the robot. The operator's PC as the master processor and the laptop installed on the robot as the slave processor. The performance of the system was demonstrated in Rescue robot league of RoboCup 2005 in Osaka (Japan) and achieved the 2nd best design award",
"title": ""
},
{
"docid": "fb97b11eba38f84f38b473a09119162a",
"text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS ’15 ). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.",
"title": ""
},
{
"docid": "1b0bccf41db4d323ac585d46475ce6f1",
"text": "For electric power transmission, high voltage overhead power lines play an important role as the costs for power transmission are comparatively low. However, the environmental conditions in many geographical regions can change over a wide range. Due to the high voltages, adequate distances between the conductors and objects in the environment have to be ensured for safety reasons. However, sag of the conductors (e.g. due to temperature variations or aging, icing of conductors as a result of extreme weather conditions) may increase safety margins and limit the operability of these power lines. Heavy loads due to icing or vibrations excited by winds increase the risk of line breakage. With online condition monitoring of power lines, critical states or states with increased wear for the conductor may be detected early and appropriate counter measures can be applied. In this paper we investigate possibilities for monitoring devices that are directly mounted onto a conductor. It is demonstrated that such a device can be powered from the electric field around the conductor and that electronic equipment can be protected from the strong electric and magnetic fields as well as transient signals due to partial discharge events.",
"title": ""
},
{
"docid": "03be8a60e1285d62c34b982ddf1bcf58",
"text": "A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "3e0dd3cf428074f21aaf202342003554",
"text": "Despite significant recent work, purely unsupervised techniques for part-of-speech (POS) tagging have not achieved useful accuracies required by many language processing tasks. Use of parallel text between resource-rich and resource-poor languages is one source of weak supervision that significantly improves accuracy. However, parallel text is not always available and techniques for using it require multiple complex algorithmic steps. In this paper we show that we can build POS-taggers exceeding state-of-the-art bilingual methods by using simple hidden Markov models and a freely available and naturally growing resource, the Wiktionary. Across eight languages for which we have labeled data to evaluate results, we achieve accuracy that significantly exceeds best unsupervised and parallel text methods. We achieve highest accuracy reported for several languages and show that our approach yields better out-of-domain taggers than those trained using fully supervised Penn Treebank.",
"title": ""
},
{
"docid": "52a01a3bb4122e313c3146363b3fb954",
"text": "We demonstrate how movements of multiple people or objects within a building can be displayed on a network representation of the building, where nodes are rooms and edges are doors. Our representation shows the direction of movements between rooms and the order in which rooms are visited, while avoiding occlusion or overplotting when there are repeated visits or multiple moving people or objects. We further propose the use of a hybrid visualization that mixes geospatial and topological (network-based) representations, enabling focus-in-context and multi-focal visualizations. An experimental comparison found that the topological representation was significantly faster than the purely geospatial representation for three out of four tasks.",
"title": ""
},
{
"docid": "6ee98121ecffa66c2bf390db70c15e09",
"text": "Fragment structure should find its application in acquiring high isolation between multipleinput multiple-output (MIMO) antennas. By gridding a design space into fragment cells, a fragmenttype isolation structure can be constructed by metalizing some of the fragment cells. For MIMO isolation design, cells to be metalized can be selected by optimization searching scheme with objectives such as isolation, return losses, and even radiation patterns of MIMO antennas. Due to the flexibility of fragment-type isolation structure, fragment-type structure has potentials to yield isolation higher than canonical isolation structures. In this paper, multi-objective evolutionary algorithm based on decomposition combined with genetic operators (MOEA/D-GO) is applied to design fragment-type isolation structures for MIMO patch antennas and MIMO PIFAs. It is demonstrated that isolation can be improved to different extents by using fragment-type isolation design. Some technique aspects related to the fragment-type isolation design, such as effects of fragment cell size, design space, density of metal cells, and efficiency consideration, are further discussed.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] |
scidocsrr
|
86d94bdac98aff3560348e80d34cb8c7
|
A bug Mining tool to identify and analyze security bugs using Naive Bayes and TF-IDF
|
[
{
"docid": "9858c1f5045c402213c5ce82c6d732f4",
"text": "Bug triage, deciding what to do with an incoming bug report, is taking up increasing amount of developer resources in large open-source projects. In this paper, we propose to apply machine learning techniques to assist in bug triage by using text categorization to predict the developer that should work on the bug based on the bug’s description. We demonstrate our approach on a collection of 15,859 bug reports from a large open-source project. Our evaluation shows that our prototype, using supervised Bayesian learning, can correctly predict 30% of the report assignments to",
"title": ""
}
] |
[
{
"docid": "a0c1f145f423052b6e8059c5849d3e34",
"text": "Improved methods of assessment and research design have established a robust and causal association between stressful life events and major depressive episodes. The chapter reviews these developments briefly and attempts to identify gaps in the field and new directions in recent research. There are notable shortcomings in several important topics: measurement and evaluation of chronic stress and depression; exploration of potentially different processes of stress and depression associated with first-onset versus recurrent episodes; possible gender differences in exposure and reactivity to stressors; testing kindling/sensitization processes; longitudinal tests of diathesis-stress models; and understanding biological stress processes associated with naturally occurring stress and depressive outcomes. There is growing interest in moving away from unidirectional models of the stress-depression association, toward recognition of the effects of contexts and personal characteristics on the occurrence of stressors, and on the likelihood of progressive and dynamic relationships between stress and depression over time-including effects of childhood and lifetime stress exposure on later reactivity to stress.",
"title": ""
},
{
"docid": "b33b10f3b6720b1bec3a030f236ac16c",
"text": "In this paper, we present a unified model for the automatic induction of word senses from text, and the subsequent disambiguation of particular word instances using the automatically extracted sense inventory. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The intuition is that a particular sense is associated with a particular topic, so that different senses can be discriminated through their association with particular topical dimensions; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions. The model is evaluated on the SEMEVAL-2010 word sense induction and disambiguation task, on which it reaches stateof-the-art results.",
"title": ""
},
{
"docid": "06a53629ea61545f73435697c038050d",
"text": "Text segmentation is an important problem in document analysis related applications. We address the problem of classifying connected components of a document image as text or non-text. Inspired from previous works in the literature, besides common size and shape related features extracted from the components, we also consider component images, without and with context information, as inputs of the classifiers. Muli-layer perceptrons and convolutional neural networks are used to classify the components. High precision and recall is obtained with respect to both text and non-text components.",
"title": ""
},
{
"docid": "bfa659ff24af7c319702a6a8c0c7dca3",
"text": "In this letter, a grounded coplanar waveguide-to-microstrip (GCPW-to-MS) transition without via holes is presented. The transition is designed on a PET® substrate and fabricated using inkjet printing technology. To our knowledge, fabrication of transitions using inkjet printing technology has not been reported in the literature. The simulations have been performed using HFSS® software and the measurements have been carried out using a Vector Network Analyzer on a broad frequency band from 40 to 85 GHz. The effect of varying several geometrical parameters of the GCPW-to-MS on the electromagnetic response is also presented. The results obtained demonstrate good characteristics of the insertion loss better than 1.5 dB, and return loss larger than 10 dB in the V-band (50-75 GHz). Such transitions are suitable for characterization of microwave components built on different flexible substrates.",
"title": ""
},
{
"docid": "8457f78251f9f9dd802603e646a9c1ce",
"text": "OBJECTIVE\nThe safest and most effective conservative treatment for patients with lumbar disc herniation (LDH) has not been established. The purpose of this study was to evaluate the effect of lumbar spine stabilization exercise (LSSE) and general exercise (GE) on pain intensity and functional capacity in young male patients with LDH.\n\n\nMETHODS\nSixty-three young male adults aged 20 to 29 years with the diagnosis of LDH were enrolled and divided into an LSSE group (n=30) and a GE group (n=33). Patients in both groups received low-power laser (LPL) therapy during the first week of the onset of LDH. Patients in the GE group underwent a GE program. Patients in the LSSE group followed an LSSE program for 3 months. All of the patients were subjected to pain intensity and functional capacity evaluations four times: at pre-and post-LPL therapy, and at 3 months and 1 year post-exercise. Pain intensity of the lower back and legs was evaluated with the visual analogue scale (VAS), and functional capacity was evaluated with the Oswestry Disability Index (ODI).\n\n\nRESULTS\nBoth groups showed a significant reduction in VAS and ODI scores at 3 and 12 months post-exercise compared with before treatment (P<0.001). The LSSE group showed a significant reduction in the average score of the VAS for low back pain (P=0.012) and the ODI (P=0.003) at 12 months post-exercise compared with the GE group.\n\n\nCONCLUSIONS\nLSSE and GE are considered as effective interventions for young male patients with LDH. Moreover, LSSE is more effective than GE, and physical therapy, such as LPL, is required during acute LDH.",
"title": ""
},
{
"docid": "fb581f0d3db5dcd7e2e2b5474c5812f1",
"text": "Sequence-to-Sequence (seq2seq) models have become overwhelmingly popular in building end-to-end trainable dialogue systems. Though highly efficient in learning the backbone of human-computer communications, they suffer from the problem of strongly favoring short generic responses. In this paper, we argue that a good response should smoothly connect both the preceding dialogue history and the following conversations. We strengthen this connection through mutual information maximization. To sidestep the nondifferentiability of discrete natural language tokens, we introduce an auxiliary continuous code space and map such code space to a learnable prior distribution for generation purpose. Experiments on two dialogue datasets validate the effectiveness of our model, where the generated responses are closely related to the dialogue context and lead to more interactive conversations.",
"title": ""
},
{
"docid": "43345e1f3205b57b5916a9c2ab1fdfb2",
"text": "The recently introduced family of fourth generation eGaN® FET power devices provides significant improvements in electrical performance figures of merit, reductions in device onresistance, and larger die, enabling improved performance in high frequency, high current applications. These new devices provide a path to approximately double the power density of brick-type standard converters. This paper describes the development of an eighth-brick (Ebrick) demonstration converter which uses the latest generation eGaN FETs. This converter is capable of output power greater than 500 W with an output of 12 V and 42 A and achieves a peak efficiency of 96.7% with 52 V input voltage.",
"title": ""
},
{
"docid": "7f6f26ac42f8f637415a45afc94daa0f",
"text": "We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning. In particular, training a neural network using synthetic data can be viewed as learning a proposal distribution generator for approximate inference in the synthetic-data generative model. We demonstrate this connection in a recognition task where we develop a novel Captcha-breaking architecture and train it using synthetic data, demonstrating both state-of-the-art performance and a way of computing task-specific posterior uncertainty. Using a neural network trained this way, we also demonstrate successful breaking of real-world Captchas currently used by Facebook and Wikipedia. Reasoning from these empirical results and drawing connections with Bayesian modeling, we discuss the robustness of synthetic data results and suggest important considerations for ensuring good neural network generalization when training with synthetic data.",
"title": ""
},
{
"docid": "6643797b32fa04bc652940188c3c6e0c",
"text": "In neural text generation such as neural machine translation, summarization, and image captioning, beam search is widely used to improve the output text quality. However, in the neural generation setting, hypotheses can finish in different steps, which makes it difficult to decide when to end beam search to ensure optimality. We propose a provably optimal beam search algorithm that will always return the optimal-score complete hypothesis (modulo beam size), and finish as soon as the optimality is established (finishing no later than the baseline). To counter neural generation’s tendency for shorter hypotheses, we also introduce a bounded length reward mechanism which allows a modified version of our beam search algorithm to remain optimal. Experiments on neural machine translation demonstrate that our principled beam search algorithm leads to improvement in BLEU score over previously proposed alternatives.",
"title": ""
},
{
"docid": "838bd8a38f9d67d768a34183c72da07d",
"text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. To our knowledge, this is the first case reported from Turkey.",
"title": ""
},
{
"docid": "d0641206af1afeab7143fa82d56ba727",
"text": "This paper outlines possible evolution trends of e-learning, supported by most recent advancements in the World Wide Web. Specifically, we consider a situation in which the Semantic Web technology and tools are widely adopted, and fully integrated within a context of applications exploiting the Internet of Things paradigm. Such a scenario will dramatically impact on learning activities, as well as on teaching strategies and instructional design methodology. In particular, the models characterized by learning pervasiveness and interactivity will be greatly empowered.",
"title": ""
},
{
"docid": "fa5a07a89f8b52759585ea20124fb3cc",
"text": "Polycystic ovary syndrome (PCOS) is considered as a highly heterogeneous and complex disease. Dimethyldiguanide (DMBG) is widely used to improve the reproductive dysfunction in women with PCOS. However, the precise mechanism by which DMBG exerts its benefical effect on PCOS remains largely unknown. The present study was designed to explore the effects of DMBG on the changes of oxidative stress and the activation of nucleotide leukin rich polypeptide 3 (NLRP3) inflammasome in the ovaries during the development and treatment of PCOS. A letrozole-induced rat PCOS model was developed. The inflammatory status was examined by analyzing the serum high sensitive C-reactive protein (hsCRP) levels in ras. We found that DMBG treatment rescued PCOS rats, which is associated with the reduced chronic low grade inflammation in these rats. In PCOS rats, the NLRP3 and the adaptor protein apoptosis-associated speck-like protein (ASC) mRNA levels, caspase-1 activation, and IL-1β production were unregulated, which was markedly attenuated by DMBG treatment. Moreover, oxidative stress was enhanced in PCOS rats as shown by increased lipid peroxidation (LPO) and activity of superoxide dismutase (SOD) and catalase. DMBG significantly decreased LPO, while it had no effects on SOD and catalase activities. Together, these results indicate that DMBG treatment may rescue PCOS rats by suppressing oxidative stress and NLRP3 inflammasome activation in PCOS ovaries.",
"title": ""
},
{
"docid": "686593aca763bf003219dc1faf05cd36",
"text": "This chapter examines positive teacher-student relationships, as seen through a variety of psychological models and provides recommendations for schools and teachers.",
"title": ""
},
{
"docid": "2e7bc1cc2f4be94ad0e4bce072a9f98a",
"text": "Glycosylation plays an important role in ensuring the proper structure and function of most biotherapeutic proteins. Even small changes in glycan composition, structure, or location can have a drastic impact on drug safety and efficacy. Recently, glycosylation has become the subject of increased focus as biopharmaceutical companies rush to create not only biosimilars, but also biobetters based on existing biotherapeutic proteins. Against this backdrop of ongoing biopharmaceutical innovation, updated methods for accurate and detailed analysis of protein glycosylation are critical for biopharmaceutical companies and government regulatory agencies alike. This review summarizes current methods of characterizing biopharmaceutical glycosylation, including compositional mass profiling, isomer-specific profiling and structural elucidation by MS and hyphenated techniques.",
"title": ""
},
{
"docid": "caa41494c6e6dc8788da6d2041084188",
"text": "In this paper the coverage and capacity of SigFox, LoRa, GPRS, and NB-IoT is compared using a real site deployment covering 8000 km2 in Northern Denmark. Using the existing Telenor cellular site grid it is shown that the four technologies have more than 99 % outdoor coverage, while GPRS is challenged for indoor coverage. Furthermore, the study analyzes the capacity of the four technologies assuming a traffic growth from 1 to 10 IoT device per user. The conclusion is that the 95 %-tile uplink failure rate for outdoor users is below 5 % for all technologies. For indoor users only NB-IoT provides uplink and downlink connectivity with less than 5 % failure rate, while SigFox is able to provide an unacknowledged uplink data service with about 12 % failure rate. Both GPRS and LoRa struggle to provide sufficient indoor coverage and capacity.",
"title": ""
},
{
"docid": "47dc81932a0ed4c56b945e49c5105c34",
"text": "In this paper, the feature selection problem was formulated as a multi-objective optimization problem, and new criteria were proposed to fulfill the goal. Foremost, data were pre-processed with missing value replacement scheme, re-sampling procedure, data type transformation procedure, and min-max normalization procedure. After that a wide variety of classifiers and feature selection methods were conducted and evaluated. Finally, the paper presented comprehensive experiments to show the relative performance of the classification tasks. The experimental results revealed the success of proposed methods in credit approval data. In addition, the numeric results also provide guides in selection of feature selection methods and classifiers in the knowledge discovery process. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7edaef142ecf8a3825affc09ad10d73a",
"text": "Internet of Things (IoT) is a network of sensors, actuators, mobile and wearable devices, simply things that have processing and communication modules and can connect to the Internet. In a few years time, billions of such things will start serving in many fields within the concept of IoT. Self-configuration, autonomous device addition, Internet connection and resource limitation features of IoT causes it to be highly prone to the attacks. Denial of Service (DoS) attacks which have been targeting the communication networks for years, will be the most dangerous threats to IoT networks. This study aims to analyze and classify the DoS attacks that may target the IoT environments. In addition to this, the systems that try to detect and mitigate the DoS attacks to IoT will be evaluated.",
"title": ""
},
{
"docid": "5dd790f34fec2f4adc52971c39e55d6b",
"text": "Although within SDN community, the notion of logically centralized network control is well understood and agreed upon, many different approaches exist on how one should deliver such a logically centralized view to multiple distributed controller instances. In this paper, we survey and investigate those approaches. We discover that we can classify the methods into several design choices that are trending among SDN adopters. Each design choice may influence several SDN issues such as scalability, robustness, consistency, and privacy. Thus, we further analyze the pros and cons of each model regarding these matters. We conclude that each design begets some characteristics. One may excel in resolving one issue but perform poor in another. We also present which design combinations one should pick to build distributed controller that is scalable, robust, consistent",
"title": ""
},
{
"docid": "54af3c39dba9aafd5b638d284fd04345",
"text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).",
"title": ""
},
{
"docid": "333645d1c405ae51aafe2b236c8fa3fd",
"text": "Proposes a new method of personal recognition based on footprints. In this method, an input pair of raw footprints is normalized, both in direction and in position for robustness image-matching between the input pair of footprints and the pair of registered footprints. In addition to the Euclidean distance between them, the geometric information of the input footprint is used prior to the normalization, i.e., directional and positional information. In the experiment, the pressure distribution of the footprint was measured with a pressure-sensing mat. Ten volunteers contributed footprints for testing the proposed method. The recognition rate was 30.45% without any normalization (i.e., raw image), and 85.00% with the authors' method.",
"title": ""
}
] |
scidocsrr
|
72232cd6e62cbe82e6b50e53430ba654
|
Analysis of quadratic dickson based envelope detectors for IoE sensor node applications
|
[
{
"docid": "e30cedcb4cb99c4c3b2743c5359cf823",
"text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.",
"title": ""
}
] |
[
{
"docid": "4b7e71b412770cbfe059646159ec66ca",
"text": "We present empirical evidence to demonstrate that there is little or no difference between the Java Virtual Machine and the .NET Common Language Runtime, as regards the compilation and execution of object-oriented programs. Then we give details of a case study that proves the superiority of the Common Language Runtime as a target for imperative programming language compilers (in particular GCC).",
"title": ""
},
{
"docid": "702470e5d2f64a2c987a082e22f544db",
"text": "Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep neural network architecture, namely generalization tower network (GTN), which can achieve MT-RL within a single learned model. Specifically, the architecture of GTN is composed of both horizontal and vertical streams. In our GTN architecture, horizontal streams are used to learn representation shared in similar tasks. In contrast, the vertical streams are introduced to be more suitable for handling diverse tasks, which encodes hierarchical shared knowledge of these tasks. The effectiveness of the introduced vertical stream is validated by experimental results. Experimental results further verify that our GTN architecture is able to advance the state-of-the-art MT-RL, via being tested on 51 Atari games.",
"title": ""
},
{
"docid": "713ade80a6c2e0164a0d6fe6ef07be37",
"text": "We review recent work on the role of intrinsic amygdala networks in the regulation of classically conditioned defensive behaviors, commonly known as conditioned fear. These new developments highlight how conditioned fear depends on far more complex networks than initially envisioned. Indeed, multiple parallel inhibitory and excitatory circuits are differentially recruited during the expression versus extinction of conditioned fear. Moreover, shifts between expression and extinction circuits involve coordinated interactions with different regions of the medial prefrontal cortex. However, key areas of uncertainty remain, particularly with respect to the connectivity of the different cell types. Filling these gaps in our knowledge is important because much evidence indicates that human anxiety disorders results from an abnormal regulation of the networks supporting fear learning.",
"title": ""
},
{
"docid": "8303d9d6c4abee81bf803240aa929747",
"text": "Kaposi's sarcoma (KS) is a multifocal hemorrhagic sarcoma that occurs primarily on the extremities. KS limited to the penis is rare and a well-recognized manifestation of acquired immune deficiency syndrome (AIDS). However, KS confined to the penis is extraordinary in human immunodeficiency virus (HIV)-negative patients. We present the case of a 68-year-old man with a dark reddish ulcerated nodule on the penile skin, which was reported as a nodular stage of KS. We detected no evidence of immunosuppression or AIDS or systemic involvements in further evaluations. In his past medical history, the patient had undergone three transurethral resections of bladder tumors due to urothelial cell carcinoma since 2000 and total gastrectomy, splenectomy, and adjuvant fluorouracil/cisplatin chemotherapy for 7 months due to advanced gastric carcinoma in 2005. The patient was circumcised and has had no recurrence for 2 years.",
"title": ""
},
{
"docid": "d78ac882d6a24d674fc3c60245fd04dc",
"text": "This paper presents the development of a portable, Non-Invasive device for monitoring patients’ blood Insulin and Glucose concentration. It is based on Near-Infrared(NIR) Spectroscopy. Perkin-Elmer Lambda 750 spectrometer was used to study the absorbance of Insulin and glucose at different wavelengths of NIR region. NIR LED (Light Emitting diode) of appropriate wavelengths were used to construct a finger clip. A separate Printed Circuit Board (PCB) was constructed; that connects to the finger clip and acquires the Photoplethysmography (PPG) signal at three separate NIR wavelengths. The PPG signal can be used to detect the concentartion of Glucose and Insulin in blood. The PCB has a WiFi module interfaced within it; that supports Internet connectivity and uploads the time series data of insulin and glucose to a server. The data can be viewed in a graphical format by authorized users. This can be used by a medical practitioner to understand whether the data shows an increasing or decreasing trend and hence, will enable to prognosticate the approaching critical condition of the patient much before the critical condition actually occurs.",
"title": ""
},
{
"docid": "941cd6b47980ff8539b7124a48f160e5",
"text": "Question Answering for complex questions is often modelled as a graph construction or traversal task, where a solver must build or traverse a graph of facts that answer and explain a given question. This “multi-hop” inference has been shown to be extremely challenging, with few models able to aggregate more than two facts before being overwhelmed by “semantic drift”, or the tendency for long chains of facts to quickly drift off topic. This is a major barrier to current inference models, as even elementary science questions require an average of 4 to 6 facts to answer and explain. In this work we empirically characterize the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three freetext corpora (including study guides and Simple Wikipedia). We demonstrate semantic drift tends to be high and aggregation quality low, at between 0.04% and 3%, and highlight scenarios that maximize the likelihood of meaningfully combining information.",
"title": ""
},
{
"docid": "f0c5f3cce1a0538e3c177ef00eab0b75",
"text": "Clickstream data are defined as the electronic record of Internet usage collected by Web servers or third-party services. The authors discuss the nature of clickstream data, noting key strengths and limitations of these data for research in marketing. The paper reviews major developments from the analysis of these data, covering advances in understanding (1) browsing and site usage behavior on the Internet, (2) the Internet’s role and efficacy as a new medium for advertising and persuasion, and (3) shopping behavior on the Internet (i.e., electronic commerce). The authors outline opportunities for new research and highlight several emerging areas likely to grow in future importance. Inherent limitations of clickstream data for understanding and predicting the behavior of Internet users or researching marketing phenomena are also discussed.",
"title": ""
},
{
"docid": "4033e07d16ad317b442be476e608e48d",
"text": "In this research, a gas sensor using double split-ring resonator (DSRR) incorporated with conducting polymer (CP) is proposed at microwave frequencies (Ku-band). The DSRR fabricated on printed circuit board (PCB) is excited by a high-impedance microstrip line, and the CP is coated inside of an inner circle of the DSRR. Electrical characteristics of the CP can be deviated by an interaction between CP and a target gas, and then this deviation of electrical characteristic is demonstrated by S21 frequency response of the DSRR. To examine the performance of the proposed sensor, 100 ppm ethanol (C2H5OH) gas is exposed at room temperature. According to the measured result, the S21 resonance frequency of the DSRR is shifted by 220 MHz and simultaneously, the resonance amplitude is changed by 0.79 dB level. It is clearly found that the DSRR with CP material can be a good candidate for a sensitive gas sensor operating at microwave frequencies.",
"title": ""
},
{
"docid": "e219c7e4078a1577f0a515494cadb45f",
"text": "Deep Convolutional Neuronal Networks (DCNNs) are showing remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The costs of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework. In our approach, image augmentations (e.g., lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.",
"title": ""
},
{
"docid": "eff89cfd6056509c13eb8ce8463f8d30",
"text": "Bioactivity of oregano methanolic extracts and essential oils is well known. Nonetheless, reports using aqueous extracts are scarce, mainly decoction or infusion preparations used for therapeutic applications. Herein, the antioxidant and antibacterial activities, and phenolic compounds of the infusion, decoction and hydroalcoholic extract of oregano were evaluated and compared. The antioxidant activity is related with phenolic compounds, mostly flavonoids, since decoction presented the highest concentration of flavonoids and total phenolic compounds, followed by infusion and hydroalcoholic extract. The samples were effective against gram-negative and gram-positive bacteria. It is important to address that the hydroalcoholic extract showed the highest efficacy against Escherichia coli. This study demonstrates that the decoction could be used for antioxidant purposes, while the hydroalcoholic extract could be incorporated in formulations for antimicrobial features. Moreover, the use of infusion/decoction can avoid the toxic effects showed by oregano essential oil, widely reported for its antioxidant and antimicrobial properties.",
"title": ""
},
{
"docid": "b2be764b8aa7ca302ec5f6c43f89fca2",
"text": "Origami, i.e. paper folding, is a powerful tool for geometrical constructions. In 1989, Humiaki Huzita introduced six folding operations based on aligning one or more combinations of points and lines [6]. Jacques Justin, in his paper of the same proceedings, also presented a list of seven distinct operations [9]. His list included, without literal description, one extra operation not in Huzita's paper. Justin's work was written in French, and was somehow unknown among researchers. This led Hatori [5] to 'discover' the same seventh operation in 2001. Alperin and Lang in 2006 [1] showed, by exhaustive enumeration of combinations of superpositions of points and lines involved, that the seven operations are complete combinations of the alignments. Huzita did not call his list of operations axioms. However, over years, the term Huzita axioms, or Huzita-Justin or Huzita-Hatori axioms, has been widely used in origami community. From logical point of view, it is not accurate to call Huzita's original statements of folding operations as axioms, because they are not always true in plane Euclidean geometry. In this paper, we present precise statements of the folding operations, by which naming them 'axioms' is logically valid, and we make some notes about the work of Huzita and Justin.",
"title": ""
},
{
"docid": "d083e8ebddf43bcd8f1efd05aa708658",
"text": "Even a casual reading of the extensive literature on student development in higher education can create confusion and perplexity. One finds not only that the problems being studied are highly diverse but also that investigators who claim to be studying the same problem frequently do not look at the same variables or employ the same methodologies. And even when they are investigating the same variables, different investigators may use completely different terms to describe and discuss these variables. My own interest in articulating a theory of student development is partly practical—I would like to bring some order into the chaos of the literature—and partly self-protective. I and increasingly bewildered by the muddle of f indings that have emerged from my own research in student development, research that I have been engaged in for more than 20 years. The theory of student involvement that I describe in this article appeals to me for several reasons. First, it is simple: I have not needed to draw a maze consisting of dozens of boxes interconnected by two-headed arrows to explain the basic elements of the theory to others. Second, the theory can explain most of the empirical knowledge about environmental influences on student development that researchers have gained over the years. Third, it is capable of embracing principles from such widely divergent sources as psychoanalysis and classical learning theory. Finally, this theory of student involvement can be used both by researchers to guide their investigation of student development—and by college administrators and",
"title": ""
},
{
"docid": "26827d9a84d4866438d69813dd3741b1",
"text": "In this study, we present an evaluation of using various methods for face recognition. As feature extracting techniques we benefit from wavelet decomposition and Eigenfaces method which is based on Principal Component Analysis (PCA). After generating feature vectors, distance classifier and Support Vector Machines (SVMs) are used for classification step. We examined the classification accuracy according to increasing dimension of training set, chosen feature extractor–classifier pairs and chosen kernel function for SVM classifier. As test set we used ORL face database which is known as a standard face database for face recognition applications including 400 images of 40 people. At the end of the overall separation task, we obtained the classification accuracy 98.1% with Wavelet–SVM approach for 240 image training set. As a special study of pattern recognition, face recognition has had crucial effects in daily life especially for security purposes. Face recognition task is actively being used at airports, employee entries , criminal detection systems, etc. For this task many methods have been proposed and tested. Most of these methods have trade off's like hardware requirements, time to update image database, time for feature extraction, response time. Generally face recognition methods are composed of a feature extractor (like PCA, Wavelet decomposer) to reduce the size of input and a classifier like Neural Networks, Support Vector Machines, Nearest Distance Classifiers to find the features which are most likely to be looked for. In this study, we chose wavelet decomposition and Eigenfaces method which is based on Principal Component Analysis (PCA) as main techniques for data reduction and feature extraction. PCA is an efficient and long term studied method to extract feature sets by creating a feature space. PCA also has low computation time which is an important advantage. On the other hand because of being a linear feature extraction method, PCA is inefficient especially when nonlinearities are present in the underlying relationships (Kursun & Favorov, 2004). Wavelet decomposition is a multilevel dimension reduction process that makes time–space–frequency analysis. Unlike Fourier transform, which provides only frequency analysis of signals, wavelet transforms provide time–frequency analysis, which is particularly useful for pattern recognition (Gorgel, Sertbas, Kilic, Ucan, & Osman, 2009). In this study, we used available 40 classes in the ORL face recognition dataset (ORL Database of Faces, 1994). Eigenfaces and Discrete Wavelet Transform are used for feature extractor. For the classification step, we consider Support Vector Machines (SVM) and nearest distance classification …",
"title": ""
},
{
"docid": "2801a7eea00bc4db7d6aacf71071de20",
"text": "Internet of Things (IoT) devices are rapidly becoming ubiquitous while IoT services are becoming pervasive. Their success has not gone unnoticed and the number of threats and attacks against IoT devices and services are on the increase as well. Cyber-attacks are not new to IoT, but as IoT will be deeply interwoven in our lives and societies, it is becoming necessary to step up and take cyber defense seriously. Hence, there is a real need to secure IoT, which has consequently resulted in a need to comprehensively understand the threats and attacks on IoT infrastructure. This paper is an attempt to classify threat types, besides analyze and characterize intruders and attacks facing IoT devices and services.",
"title": ""
},
{
"docid": "8261ce69652ba278f9154c364a1f558a",
"text": "Recently, the skill involved in playing and mastering video games has led to the professionalization of the activity in the form of ‘esports’ (electronic sports). The aim of the present paper was to review the main topics of psychological interest about esports and then to examine the similarities of esports to professional and problem gambling. As a result of a systematic literature search, eight studies were identified that had investigated three topics: (1) the process of becoming an esport player, (2) the characteristics of esport players such as mental skills and motivations, and (3) the motivations of esport spectators. These findings draw attention to the new research field of professional video game playing and provides some preliminary insight into the psychology of esports players. The paper also examines the similarities between esport players and professional gamblers (and more specifically poker players). It is suggested that future research should focus on esport players’ psychological vulnerability because some studies have begun to investigate the difference between problematic and professional gambling and this might provide insights into whether the playing of esports could also be potentially problematic for some players.",
"title": ""
},
{
"docid": "4ae9c4cc0c4e309e9e5c9533fb3cf3b5",
"text": "The electronics packages for many robot control systems have very similar requirements, yet are often redesigned for each custom application. To reduce wasted time and effort, the project presented in this paper (the Wireless Autonomous Robot Platform with Inertial Navigation and Guidance, WARP-WING) is intended to create a complete and easily customizable general purpose control system for miniature robotic systems, in particular micro air vehicles. In its default configuration, hardware designs, firmware, and software are all available to deliver an out-of-the-box robot control solution comprising 6 degree-of-freedom inertial sensors, a microprocessor, and wireless communication, along with general purpose input/output pins, serial ports, and control outputs for interfacing to additional sensors and actuators. The entire project is open source and a process is in place to enable modification of any component, allowing for easy adaptation to any need. WARPWING is already in use in a number of labs, with each research group contributing its expertise to enhance the platform and make such modifications available to others as well.",
"title": ""
},
{
"docid": "1fad899c589b70f65fac2cce2b814ffd",
"text": "This paper proposes a model and an architecture for designing intelligent tutoring system using Bayesian Networks. The design model of an intelligent tutoring system is directed towards the separation between the domain knowledge and the tutor shell. The architecture is composed by a user model, a knowledge base, an adaptation module, a pedagogical module and a presentation module. Bayesian Networks are used to assess user’s state of knowledge and preferences, in order to suggest pedagogical options and recommend future steps in the tutor. The proposed architecture is implemented in the Internet, enabling its use as an e-learning tool. An example of an intelligent tutoring system is shown for illustration purposes.",
"title": ""
},
{
"docid": "f80fbac1c3f3060f2e023a77df1e1532",
"text": "Context: Big Data Cybersecurity Analytics is increasingly becoming an important area of research and practice aimed at protecting networks, computers, and data from unauthorized access by analysing security event data using big data tools and technologies. Whilst a plethora of Big Data Cybersecurity Analytic Systems have been reported in the literature, there is a lack of a systematic and comprehensive review of the literature from an architectural perspective. Objective: This paper reports a systematic review aimed at identifying the most frequently reported quality attributes and architectural tactics for Big Data Cybersecurity Analytic Systems. Method: We used Systematic Literature Review (SLR) method for reviewing 74 primary studies selected using well-defined criteria. Results: Our findings are twofold: (i) identification of 12 most frequently reported quality attributes and the justification for their significance for Big Data Cybersecurity Analytic Systems; and (ii) identification and codification of 17 architectural tactics for addressing the quality attributes that are commonly associated with Big Data Cybersecurity Analytic systems. The identified tactics include six performance tactics, four accuracy tactics, two scalability tactics, three reliability tactics, and one security and usability tactic each. Conclusion: Our findings have revealed that (a) despite the significance of interoperability, modifiability, adaptability, generality, stealthiness, and privacy assurance, these quality attributes lack explicit architectural support in the literature (b) empirical investigation is required to evaluate the impact of codified architectural tactics (c) a good deal of research effort should be invested to explore the trade-offs and dependencies among the identified tactics (d) there is a general lack of effective collaboration between academia and industry for supporting the field of Big Data Cybersecurity Analytic Systems and (e) more research is required on the comparative analysis among big data processing frameworks (i.e., Hadoop, Spark, and Storm) when used for Big Data Cybersecurity Analytic Systems.",
"title": ""
},
{
"docid": "67d9006e1bd12d937995da20348511c0",
"text": "This paper presents the sensor network infrastructure for a home care system that allows long-term monitoring of physiological data and everyday activities. The aim of the proposed system is to allow the elderly to live longer in their home without compromising safety and ensuring the detection of health problems. The system offers the possibility of a virtual visit via a teleoperated robot. During the visit, physiological data and activities occurring during a period of time can be discussed. These data are collected from physiological sensors (e.g., temperature, blood pressure, glucose) and environmental sensors (e.g., motion, bed/chair occupancy, electrical usage). The system can also give alarms if sudden problems occur, like a fall, and warnings based on more long-term trends, such as the deterioration of health being detected. It has been implemented and tested in a test environment and has been deployed in six real homes for a year-long evaluation. The key contribution of the paper is the presentation of an implemented system for ambient assisted living (AAL) tested in a real environment, combining the acquisition of sensor data, a flexible and adaptable middleware compliant with the OSGistandard and a context recognition application. The system has been developed in a European project called GiraffPlus.",
"title": ""
}
] |
scidocsrr
|
062f7684afddc733806155de5506fbd2
|
Recurrent neural network training with dark knowledge transfer
|
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
},
{
"docid": "db433a01dd2a2fd80580ffac05601f70",
"text": "While depth tends to improve network performances, it also m akes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed a t obtaining small and fast-to-execute models, and it has shown that a student netw ork could imitate the soft output of a larger teacher network or ensemble of networ ks. In this paper, we extend this idea to allow the training of a student that is d eeper and thinner than the teacher, using not only the outputs but also the inte rmediate representations learned by the teacher as hints to improve the traini ng process and final performance of the student. Because the student intermedia te hidden layer will generally be smaller than the teacher’s intermediate hidde n layer, additional parameters are introduced to map the student hidden layer to th e prediction of the teacher hidden layer. This allows one to train deeper studen s that can generalize better or run faster, a trade-off that is controlled by the ch osen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teache r network.",
"title": ""
},
{
"docid": "46fba65ad6ad888bb3908d75f0bcc029",
"text": "Deep neural network (DNN) obtains significant accuracy improvements on many speech recognition tasks and its power comes from the deep and wide network structure with a very large number of parameters. It becomes challenging when we deploy DNN on devices which have limited computational and storage resources. The common practice is to train a DNN with a small number of hidden nodes and a small senone set using the standard training process, leading to significant accuracy loss. In this study, we propose to better address these issues by utilizing the DNN output distribution. To learn a DNN with small number of hidden nodes, we minimize the Kullback–Leibler divergence between the output distributions of the small-size DNN and a standard large-size DNN by utilizing a large number of un-transcribed data. For better senone set generation, we cluster the senones in the large set into a small one by directly relating the clustering process to DNN parameters, as opposed to decoupling the senone generation and DNN training process in the standard training. Evaluated on a short message dictation task, the proposed two methods get 5.08% and 1.33% relative word error rate reduction from the standard training method, respectively.",
"title": ""
}
] |
[
{
"docid": "8b067b1115d4bc7c8656564bc6963d7b",
"text": "Sentence Function: Indicating the conversational purpose of speakers • Interrogative: Acquire further information from the user • Imperative: Make requests, instructions or invitations to elicit further information • Declarative: Make statements to state or explain something Response Generation Task with Specified Sentence Function • Global Control: Plan different types of words globally • Compatibility: Controllable sentence function + informative content",
"title": ""
},
{
"docid": "16c58710e1285a55d75f996c2816b9b0",
"text": "Face morphing is an effect that shows a transition from one face image to another face image smoothly. It has been widely used in various fields of work, such as animation, movie production, games, and mobile applications. Two types of methods have been used to conduct face morphing. Semi automatic mapping methods, which allow users to map corresponding pixels between two face images, can produce a smooth transition of result images. Mapping the corresponding pixel between two human face images is usually not trivial. Fully automatic methods have also been proposed for morphing between two images having similar face properties, where the results depend on the similarity of the input face images. In this project, we apply a critical point filter to determine facial features for automatically mapping the correspondence of the input face images. The critical point filters can be used to extract the main features of input face images, including color, position and edge of each facial component in the input images. An energy function is also proposed for mapping the corresponding pixels between pixels of the input face images. The experimental results show that position of each face component plays a more important role than the edge and color of the face. We can summarize that, using the critical point filter, the proposed method to generate face morphing can produce a smooth image transition with our adjusted weight function.",
"title": ""
},
{
"docid": "969b49b20271f2714ad96d739bf79f08",
"text": "Control of a robot manipulator in contact with the environment is usually conducted by the direct feedback control system using a force-torque sensor or the indirect impedance control scheme. Although these methods have been successfully applied to many applications, simultaneous control of force and position cannot be achieved. Furthermore, collision safety has been of primary concern in recent years with emergence of service robots in direct contact with humans. To cope with such problems, redundant actuation has been used to enhance the performance of a position/force controller. In this paper, the novel design of a double actuator unit (DAU) composed of double actuators and a planetary gear train is proposed to provide the capability of simultaneous control of position and force as well as the improved collision safety. Since one actuator controls position and the other actuator modulates stiffness, DAU can control the position and stiffness simultaneously at the same joint. The torque exerted on the joint can be estimated without an expensive torque/force sensor. DAU is capable of detecting dynamic collision by monitoring the speed of the stiffness modulator. Upon detection of dynamic collision, DAU immediately reduces its joint stiffness according to the collision magnitude, thus providing the optimum collision safety. It is shown from various experiments that DAU can provide good performance of position tracking, force estimation and collision safety.",
"title": ""
},
{
"docid": "166b16222ecc15048972e535dbf4cb38",
"text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.",
"title": ""
},
{
"docid": "0de069da5fd8e5d36c399ef3da013320",
"text": "This paper explores the contrasting notions of \"permanance and disposability,\" \"the digital and the physical,\" and \"symbolism and function\" in the context of interaction design. Drawing from diverse streams of knowledge, we describe a novel design direction for enduring computational heirlooms based on the marriage of decentralized, trustless software and durable mobile hardware. To justify this concept, we review prior research; attempt to redefine the notion of \"material;\" propose blockchain-based software as a particular digital material to serve as a substrate for computational heirlooms; and argue for the use of mobile artifacts, informed in terms of their materials and formgiving practices by mechanical wristwatches, as its physical embodiment and functional counterpart. This integration is meant to enable mobile and ubiquitous interactive systems for the storing, experiencing, and exchanging value throughout multiple human lifetimes; showcasing the feats of computational sciences and crafts; and enabling novel user experiences.",
"title": ""
},
{
"docid": "b0a1cdf37eb1d78262ed663974a36793",
"text": "OBJECTIVE\nThe present study aimed at examining the time course and topography of oscillatory brain activity and event-related potentials (ERPs) in response to laterally presented affective pictures.\n\n\nMETHODS\nElectroencephalography was recorded from 129 electrodes in 10 healthy university students during presentation of pictures from the international affective picture system. Frequency measures and ERPs were obtained for pleasant, neutral, and unpleasant pictures.\n\n\nRESULTS\nIn accordance with previous reports, a modulation of the late positive ERP wave at parietal recording sites was found as a function of emotional arousal. Early mid gamma band activity (GBA; 30-45 Hz) at 80 ms post-stimulus was enhanced in response to aversive stimuli only, whereas the higher GBA (46-65 Hz) at 500 ms showed an enhancement of arousing, compared to neutral pictures. ERP and late gamma effects showed a pronounced right-hemisphere preponderance, but differed in terms of topographical distribution.\n\n\nCONCLUSIONS\nLate gamma activity may represent a correlate of widespread cortical networks processing different aspects of emotionally arousing visual objects. In contrast, differences between affective categories in early gamma activity might reflect fast detection of aversive stimulus features.",
"title": ""
},
{
"docid": "3c2684e27bfcceebb1ea093e60b18577",
"text": "Studies have explored the predictors of selfie-posting, but rarely investigated selfie-editing, a virtual makeover for online self-presentation. This study, based on social comparison theory, examined a psychological pathway from individual characteristics to selfie-editing behavior through social comparison. It was hypothesized that selfie-taking, public self-consciousness, social media use, and satisfaction with facial appearance would indirectly influence selfie-editing through social comparison of appearance (with friends or social media influencers/celebrities). A two-wave longitudinal online survey was conducted in South Korea among female smartphone users aged 20 to 39 (N 1⁄4 1064 at Wave 1 and 782 at Wave 2). The results revealed that frequent selfie-taking, higher levels of public self-consciousness, and more use of social media at Wave 1 were associated with social comparison with friends at Wave 1, which increased selfie-editing behavior at Wave 2. However, those three independent variables did not have indirect effects on selfie-editing at Wave 2 through social comparison with influencers/celebrities. Also, satisfaction with facial appearance had neither direct nor indirect effect on selfie-editing at Wave 2. The findings suggest that individuals engage in social comparison and resulting selfie-editing not because of their dissatisfaction with appearance, but because of the desire for more ideal online self-",
"title": ""
},
{
"docid": "415423f706491c5ec3df6a3b3bf48743",
"text": "The realm of human uniqueness steadily shrinks; reflecting this, other primates suffer from states closer to depression or anxiety than 'depressive-like' or 'anxiety-like behavior'. Nonetheless, there remain psychiatric domains unique to humans. Appreciating these continuities and discontinuities must inform the choice of neurobiological approach used in studying any animal model of psychiatric disorders. More fundamentally, the continuities reveal how aspects of psychiatric malaise run deeper than our species' history.",
"title": ""
},
{
"docid": "46a47931c51a3b5580580d27a9a6d132",
"text": "In airline service industry, it is difficult to collect data about customers' feedback by questionnaires, but Twitter provides a sound data source for them to do customer sentiment analysis. However, little research has been done in the domain of Twitter sentiment classification about airline services. In this paper, an ensemble sentiment classification strategy was applied based on Majority Vote principle of multiple classification methods, including Naive Bayes, SVM, Bayesian Network, C4.5 Decision Tree and Random Forest algorithms. In our experiments, six individual classification approaches, and the proposed ensemble approach were all trained and tested using the same dataset of 12864 tweets, in which 10 fold evaluation is used to validate the classifiers. The results show that the proposed ensemble approach outperforms these individual classifiers in this airline service Twitter dataset. Based on our observations, the ensemble approach could improve the overall accuracy in twitter sentiment classification for other services as well.",
"title": ""
},
{
"docid": "0512987d091d29681eb8ba38a1079cff",
"text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posterior error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.",
"title": ""
},
{
"docid": "56b2d8ffe74108d5b757c62eb7a7d31d",
"text": "Multi-label classification is an important machine learning task wherein one assigns a subset of candidate labels to an object. In this paper, we propose a new multi-label classification method based on Conditional Bernoulli Mixtures. Our proposed method has several attractive properties: it captures label dependencies; it reduces the multi-label problem to several standard binary and multi-class problems; it subsumes the classic independent binary prediction and power-set subset prediction methods as special cases; and it exhibits accuracy and/or computational complexity advantages over existing approaches. We demonstrate two implementations of our method using logistic regressions and gradient boosted trees, together with a simple training procedure based on Expectation Maximization. We further derive an efficient prediction procedure based on dynamic programming, thus avoiding the cost of examining an exponential number of potential label subsets. Experimental results show the effectiveness of the proposed method against competitive alternatives on benchmark datasets.",
"title": ""
},
{
"docid": "1d41e6f55521cdba4fc73febd09d2eb4",
"text": "1.",
"title": ""
},
{
"docid": "86de6e4d945f0d1fa7a0b699064d7bd5",
"text": "BACKGROUND\nTo increase understanding of the relationships among sexual violence, paraphilias, and mental illness, the authors assessed the legal and psychiatric features of 113 men convicted of sexual offenses.\n\n\nMETHOD\n113 consecutive male sex offenders referred from prison, jail, or probation to a residential treatment facility received structured clinical interviews for DSM-IV Axis I and II disorders, including sexual disorders. Participants' legal, sexual and physical abuse, and family psychiatric histories were also evaluated. We compared offenders with and without paraphilias.\n\n\nRESULTS\nParticipants displayed high rates of lifetime Axis I and Axis II disorders: 96 (85%) had a substance use disorder; 84 (74%), a paraphilia; 66 (58%), a mood disorder (40 [35%], a bipolar disorder and 27 [24%], a depressive disorder); 43 (38%), an impulse control disorder; 26 (23%), an anxiety disorder; 10 (9%), an eating disorder; and 63 (56%), antisocial personality disorder. Presence of a paraphilia correlated positively with the presence of any mood disorder (p <.001), major depression (p =.007), bipolar I disorder (p =.034), any anxiety disorder (p=.034), any impulse control disorder (p =.006), and avoidant personality disorder (p =.013). Although offenders without paraphilias spent more time in prison than those with paraphilias (p =.019), paraphilic offenders reported more victims (p =.014), started offending at a younger age (p =.015), and were more likely to perpetrate incest (p =.005). Paraphilic offenders were also more likely to be convicted of (p =.001) or admit to (p <.001) gross sexual imposition of a minor. Nonparaphilic offenders were more likely to have adult victims exclusively (p =.002), a prior conviction for theft (p <.001), and a history of juvenile offenses (p =.058).\n\n\nCONCLUSIONS\nSex offenders in the study population displayed high rates of mental illness, substance abuse, paraphilias, personality disorders, and comorbidity among these conditions. Sex offenders with paraphilias had significantly higher rates of certain types of mental illness and avoidant personality disorder. Moreover, paraphilic offenders spent less time in prison but started offending at a younger age and reported more victims and more non-rape sexual offenses against minors than offenders without paraphilias. On the basis of our findings, we assert that sex offenders should be carefully evaluated for the presence of mental illness and that sex offender management programs should have a capacity for psychiatric treatment.",
"title": ""
},
{
"docid": "7170a9d4943db078998e1844ad67ae9e",
"text": "Privacy has become increasingly important to the database community which is reflected by a noteworthy increase in research papers appearing in the literature. While researchers often assume that their definition of “privacy” is universally held by all readers, this is rarely the case; so many papers addressing key challenges in this domain have actually produced results that do not consider the same problem, even when using similar vocabularies. This paper provides an explicit definition of data privacy suitable for ongoing work in data repositories such as a DBMS or for data mining. The work contributes by briefly providing the larger context for the way privacy is defined legally and legislatively but primarily provides a taxonomy capable of thinking of data privacy technologically. We then demonstrate the taxonomy’s utility by illustrating how this perspective makes it possible to understand the important contribution made by researchers to the issue of privacy. The conclusion of this paper is that privacy is indeed multifaceted so no single current research effort adequately addresses the true breadth of the issues necessary to fully understand the scope of this important issue.",
"title": ""
},
{
"docid": "c7435dedf3733e3dd2285b1b04533b1c",
"text": "Deciding whether a claim is true or false often requires a deeper understanding of the evidence supporting and contradicting the claim. However, when presented with many evidence documents, users do not necessarily read and trust them uniformly. Psychologists and other researchers have shown that users tend to follow and agree with articles and sources that hold viewpoints similar to their own, a phenomenon known as confirmation bias. This suggests that when learning about a controversial topic, human biases and viewpoints about the topic may affect what is considered “trustworthy” or credible. It is an interesting challenge to build systems that can help users overcome this bias and help them decide the truthfulness of claims. In this article, we study various factors that enable humans to acquire additional information about controversial claims in an unbiased fashion. Specifically, we designed a user study to understand how presenting evidence with contrasting viewpoints and source expertise ratings affect how users learn from the evidence documents. We find that users do not seek contrasting viewpoints by themselves, but explicitly presenting contrasting evidence helps them get a well-rounded understanding of the topic. Furthermore, explicit knowledge of the credibility of the sources and the context in which the source provides the evidence document not only affects what users read but also whether they perceive the document to be credible. Introduction",
"title": ""
},
{
"docid": "f2f7b7152de3b83cc476e38eb6265fdf",
"text": "The discrimination of textures is a critical aspect of identi\"cation in digital imagery. Texture features generated by Gabor \"lters have been increasingly considered and applied to image analysis. Here, a comprehensive classi\"cation and segmentation comparison of di!erent techniques used to produce texture features using Gabor \"lters is presented. These techniques are based on existing implementations as well as new, innovative methods. The functional characterization of the \"lters as well as feature extraction based on the raw \"lter outputs are both considered. Overall, using the Gabor \"lter magnitude response given a frequency bandwidth and spacing of one octave and orientation bandwidth and spacing of 303 augmented by a measure of the texture complexity generated preferred results. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9327ab4f9eba9a32211ddb39463271b1",
"text": "We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs - a space-efficient time series visualization technique - across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.",
"title": ""
},
{
"docid": "0f59dd09af90b911688d584292e262ed",
"text": "This article is on defining and measuring of organizational culture and its impact on the organizational performance, through an analysis of existing empirical studies and models link with the organizational culture and performance. The objective of this article is to demonstrate conceptualization, measurement and examine various concepts on organization culture and performance. After analysis of wide literature, it is found that organizational culture has deep impact on the variety of organizations process, employees and its performance. This also describes the different dimensions of the culture. Research shows that if employee are committed and having the same norms and value as per organizations have, can increase the performance toward achieving the overall organization goals. Balance Scorecard is suggested tool to measure the performance in the performance management system. More research can be done in this area to understand the nature and ability of the culture in manipulating performance of the organization. Managers and leaders are recommended to develop the strong culture in the organization to improve the overall performance of the employees and organization.",
"title": ""
},
{
"docid": "db8c9f9ba2c0bfca3a3172c915c86c1f",
"text": "In this brief, the output reachable estimation and safety verification problems for multilayer perceptron (MLP) neural networks are addressed. First, a conception called maximum sensitivity is introduced, and for a class of MLPs whose activation functions are monotonic functions, the maximum sensitivity can be computed via solving convex optimization problems. Then, using a simulation-based method, the output reachable set estimation problem for neural networks is formulated into a chain of optimization problems. Finally, an automated safety verification is developed based on the output reachable set estimation result. An application to the safety verification for a robotic arm model with two joints is presented to show the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "a671673f330bd2b1ec14aaca9f75981a",
"text": "The aim of this study was to contrast the validity of two opposing explanatory hypotheses about the effect of online communication on adolescents' well-being. The displacement hypothesis predicts that online communication reduces adolescents' well-being because it displaces time spent with existing friends, thereby reducing the quality of these friendships. In contrast, the stimulation hypothesis states that online communication stimulates well-being via its positive effect on time spent with existing friends and the quality of these friendships. We conducted an online survey among 1,210 Dutch teenagers between 10 and 17 years of age. Using mediation analyses, we found support for the stimulation hypothesis but not for the displacement hypothesis. We also found a moderating effect of type of online communication on adolescents' well-being: Instant messaging, which was mostly used to communicate with existing friends, positively predicted well-being via the mediating variables (a) time spent with existing friends and (b) the quality of these friendships. Chat in a public chatroom, which was relatively often used to talk with strangers, had no effect on adolescents' wellbeing via the mediating variables.",
"title": ""
}
] |
scidocsrr
|
f95981f4b21f23992edf86f912ac5cc4
|
Getting a Job via Career-Oriented Social Networking Sites: The Weakness of Ties
|
[
{
"docid": "6adbe9f2de5a070cf9c1b7f708f4a452",
"text": "Prior research has provided valuable insights into how and why employees make a decision about the adoption and use of information technologies (ITs) in the workplace. From an organizational point of view, however, the more important issue is how managers make informed decisions about interventions that can lead to greater acceptance and effective utilization of IT. There is limited research in the IT implementation literature that deals with the role of interventions to aid such managerial decision making. Particularly, there is a need to understand how various interventions can influence the known determinants of IT adoption and use. To address this gap in the literature, we draw from the vast body of research on the technology acceptance model (TAM), particularly the work on the determinants of perceived usefulness and perceived ease of use, and: (i) develop a comprehensive nomological network (integrated model) of the determinants of individual level (IT) adoption and use; (ii) empirically test the proposed integrated model; and (iii) present a research agenda focused on potential preand postimplementation interventions that can enhance employees’ adoption and use of IT. Our findings and research agenda have important implications for managerial decision making on IT implementation in organizations. Subject Areas: Design Characteristics, Interventions, Management Support, Organizational Support, Peer Support, Technology Acceptance Model (TAM), Technology Adoption, Training, User Acceptance, User Involvement, and User Participation.",
"title": ""
},
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "7c6db42ba77e2d2b2e33df40f59844e3",
"text": "Timely and accurate processing of crisis information and effective communication have been documented as critical elements of disaster relief operations. Despite the lessons learned from previous crises, preparing information for humanitarian assistance and ineffective information sharing remain a recurring and almost impossible task for relief agencies. The purpose of this paper is to propose a template-based methodology to archive past disaster relief operations, create “descriptive” templates for advanced preparedness, and design “normative” templates for fast execution of assistance operations, while reducing miscommunications among aid agencies.",
"title": ""
},
{
"docid": "e0d0a0f59f5a894c3674b903c5b7b14c",
"text": "Automated Information Systems has played a major role in the growth, advancement, and modernization of our daily work processes. The main purpose of this paper is to develop a safe and secure web based attendance monitoring system using Biometrics and Radio Frequency Identification (RFID) Technology based on multi-tier architecture, for both computers and smartphones. The system can maintain the attendance records of both students and teachers/staff members of an institution. The system can also detect the current location of the students, faculties, and other staff members anywhere within the domain of institution campus. With the help of android application one can receive live feeds of various campus activities, keep updated with the current topics in his/her enrolled courses as well as track his/her friends on a real time basis. An automated SMS service is facilitated in the system, which sends an SMS automatically to the parents in order to notify that their ward has successfully reached the college. Parents as well as student will be notified via e-mail, if the student is lagging behind in attendance. There is a functionality of automatic attendance performance graph in the system, which gives an idea of the student's consistency in attendance throughout the semester.",
"title": ""
},
{
"docid": "2e2ee64b0e2d18fff783d67fade3f9b3",
"text": "This paper discusses some aspects of selecting and testing random and pseudorandom number generators. The outputs of such generators may be used in many cryptographic apphcations, such as the generation of key material. Generators suitable for use in cryptographic applications may need to meet stronger requirements than for other applications. In particular, their outputs must be unpredictable in the absence of knowledge of the inputs. Some criteria for characterizing and selecting appropriate generators are discussed in this document. The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.",
"title": ""
},
{
"docid": "d5201bbe0f0de8008913cf2a16917036",
"text": "Mobile learning provides unique learning experiences for learners in both formal and informal environments, supporting various pedagogies with the unique characteristics that are afforded by mobile technology. Mobile learning, as a growing topic of interest, brings challenges of design for teachers and course designers alike. Current research on mobile learning has covered various aspects such as personalization, context sensitivity, ubiquity and pedagogy. Existing theories and findings are valuable to the understanding of mobile learning, however they are fragmented and separate, and need to be understood within the broader mobile learning paradigm. This paper unifies existing theories into a method for mobile learning design that can be generalized across mobile learning applications. This method develops from a strategy – seeking objectives, identifying the approaches to learning and the context in which the course will exist, to guide the content, delivery and structure of the course towards a successful implementation that is evaluated against the initial objectives set out.",
"title": ""
},
{
"docid": "05ba530d5f07e141d18c3f9b92a6280d",
"text": "In this paper, we introduce autoencoder ensembles for unsupervised outlier detection. One problem with neural networks is that they are sensitive to noise and often require large data sets to work robustly, while increasing data size makes them slow. As a result, there are only a few existing works in the literature on the use of neural networks in outlier detection. This paper shows that neural networks can be a very competitive technique to other existing methods. The basic idea is to randomly vary on the connectivity architecture of the autoencoder to obtain significantly better performance. Furthermore, we combine this technique with an adaptive sampling method to make our approach more efficient and effective. Experimental results comparing the proposed approach with state-of-theart detectors are presented on several benchmark data sets showing the accuracy of our approach.",
"title": ""
},
{
"docid": "967b74eee520a4259ea318310662ebd1",
"text": "A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biologically plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC.",
"title": ""
},
{
"docid": "80fe4fa6ea9312665d2b576bb18c416f",
"text": "As location-based social networks (LBSNs) rapidly grow, it is a timely topic to study how to recommend users with interesting locations, known as points-of-interest (POIs). Most existing POI recommendation techniques only employ the check-in data of users in LBSNs to learn their preferences on POIs by assuming a user's check-in frequency to a POI explicitly reflects the level of her preference on the POI. However, in reality users usually visit POIs only once, so the users' check-ins may not be sufficient to derive their preferences using their check-in frequencies only. Actually, the preferences of users are exactly implied in their opinions in text-based tips commenting on POIs. In this paper, we propose an opinion-based POI recommendation framework called ORec to take full advantage of the user opinions on POIs expressed as tips. In ORec, there are two main challenges: (i) detecting the polarities of tips (positive, neutral or negative), and (ii) integrating them with check-in data including social links between users and geographical information of POIs. To address these two challenges, (1) we develop a supervised aspect-dependent approach to detect the polarity of a tip, and (2) we devise a method to fuse tip polarities with social links and geographical information into a unified POI recommendation framework. Finally, we conduct a comprehensive performance evaluation for ORec using two large-scale real data sets collected from Foursquare and Yelp. Experimental results show that ORec achieves significantly superior polarity detection and POI recommendation accuracy compared to other state-of-the-art polarity detection and POI recommendation techniques.",
"title": ""
},
{
"docid": "09bfd65053c41aae476ddda960e5fc0d",
"text": "With the proliferation of portable and mobile IoT devices and their increasing processing capability, we witness that the edge of network is moving to the IoT gateways and smart devices. To avoid Big Data issues (e.g. high latency of cloud based IoT), the processing of the captured data is starting from the IoT edge node. However, the available processing capabilities and energy resources are still limited and do not allow to fully process the data on-board. It calls for offloading some portions of computation to the gateway or servers. Due to the limited bandwidth of the IoT gateways, choosing the offloading levels of connected devices and allocating bandwidth to them is a challenging problem. This paper proposes a technique for managing computation offloading in a local IoT network under bandwidth constraints. The existing bandwidth allocation and computation offloading management techniques underutilize the gateway's resources (e.g. bandwidth) due to the fragmentation issue. This issue stems from the discrete coarse-grained choices (i.e. offloading levels) on the IoT end nodes. Our proposed technique addresses this issue, and utilizes the available resources of the gateway effectively. The experimental results show on average 1 hour (up to 1.5 hour) improvement in battery life of edge devices. The utilization of gateway's bandwidth increased by 40%.",
"title": ""
},
{
"docid": "e1a2dc853f96f5b01fe89e5462bdcb52",
"text": "Natural language generation from visual inputs has attracted extensive research attention recently. Generating poetry from visual content is an interesting but very challenging task. We propose and address the new multimedia task of generating classical Chinese poetry from image streams. In this paper, we propose an Images2Poem model with a selection mechanism and an adaptive self-attention mechanism for the problem. The model first selects representative images to summarize the image stream. During decoding, it adaptively pays attention to the information from either source-side image stream or target-side previously generated characters. It jointly summarizes the images and generates relevant, high-quality poetry from image streams. Experimental results demonstrate the effectiveness of the proposed approach. Our model outperforms baselines in different human evaluation metrics.",
"title": ""
},
{
"docid": "281f23c51d3ba27e09e3109c8578c385",
"text": "Generative Adversarial Networks (GANs) are an incredibly exciting approach for efficiently training computers to learn many features in data, as well as to generate realistic novel samples. Thanks to a number of their unique characteristics, some experts believe they may reinvent machine learning. In this thesis I explore the state of the GAN, focusing on the mechanisms by which they work, the fundamental challenges and strategies associated with training them, a selection of their various extensions, and what they may have to offer to the the greater machine learning community. I also consider the broader idea of building machine learning systems comprised of multiple neural networks, as opposed to using a single network. Using the state of the art progressive growing of GANs approach, I conducted experiments where I generated painting-like images that I believe to be the most authentic GAN-generated portrait paintings. I also generated highly realistic chest X-ray images, using a progressively grown GAN trained without labels on the NIH’s ChestX-ray14 dataset, which contains 112,000 chest X-ray images with 14 different disease diagnoses represented; it still remains to be seen whether the GAN-generated X-ray images contain clear identifying features of the various diseases. My generated results further demonstrate the relatively stable training of the progressive growing approach as well as the GAN’s compelling capacity for learning features in a variety of forms of image data.",
"title": ""
},
{
"docid": "75bd4eca2d60dfbe7426914b178cd76a",
"text": "While precision and recall have served the information extraction community well as two separate measures of system performance, we show that the F -measure, the weighted harmonic mean of precision and recall, exhibits certain undesirable behaviors. To overcome these limitations, we define an error measure, the slot error rate, which combines the different types of error directly, without having to resort to precision and recall as preliminary measures. The slot error rate is analogous to the word error rate that is used for measuring speech recognition performance; it is intended to be a measure of the cost to the user for the system to make the different types of errors.",
"title": ""
},
{
"docid": "793edca657c68ade4d2391c23f585c41",
"text": "In the linear bandit problem a learning agent chooses an arm at each round and receives a stochastic reward. The expected value of this stochastic reward is an unknown linear function of the arm choice. As is standard in bandit problems, a learning agent seeks to maximize the cumulative reward over an n round horizon. The stochastic bandit problem can be seen as a special case of the linear bandit problem when the set of available arms at each round is the standard basis ei for the Euclidean space R, i.e. the vector ei is a vector with all 0s except for a 1 in the ith coordinate. As a result each arm is independent of the others and the reward associated with each arm depends only on a single parameter as is the case in stochastic bandits. The underlying algorithmic approach to solve this problem uses the optimism in the face of uncertainty (OFU) principle. The OFU principle solves the exploration-exploitation tradeoff in the linear bandit problem by maintaining a confidence set for the vector of coefficients of the linear function that governs rewards. In each round the algorithm chooses an estimate of the coefficients of the linear function from the confidence set and then takes an action so that the predicted reward is maximized. The problem reduces to constructing confidence sets for the vector of coefficients of the linear function based on the action-reward pairs observed in the past time steps. The linear bandit problem was first studied by Auer et al. (2002) [1] under the name of linear reinforcement learning. Since the introduction of the problem, several works have improved the analysis and explored variants of the problem. The most influential works include Dani et al. (2008) [2], Rusmevichientong et al. (2010) [3], and Abbasi et al. (2011) [4]. In each of these works the set of available arms remains constant, but the set is only restricted to being a bounded subset of a finite-dimensional vector space. Variants of the problem formulation have also been widely applied to recommendation systems following the work of Li et al. (2010) [5] within the context of web advertisement. An important property of this problem is that the arms are not independent because future arm choices depend on the confidence sets constructed from past choices. In the literature, several works including [5] have failed to recognize this property leading to faulty analysis. This fine detail requires special care which we explore in depth in Section 2.",
"title": ""
},
{
"docid": "4d5e72046bfd44b9dc06dfd02812f2d6",
"text": "Recommender systems in the last decade opened new interactive channels between buyers and sellers leading to new concepts involved in the marketing strategies and remarkable positive gains in online sales. Businesses intensively aim to maintain customer loyalty, satisfaction and retention; such strategic longterm values need to be addressed by recommender systems in a more tangible and deeper manner. The reason behind the considerable growth of recommender systems is for tracking and analyzing the buyer behavior on the one to one basis to present items on the web that meet his preference, which is the core concept of personalization. Personalization is always related to the relationship between item and user leaving out the contextual information about this relationship. User's buying decision is not only affected by the presented item, but also influenced by its price and the context in which the item is presented, such as time or place. Recently, new system has been designed based on the concept of utilizing price personalization in the recommendation process. This system is newly coined as personalized pricing recommender system (PPRS). We propose personalized pricing recommender system with a novel approach of calculating consumer online real value to determine dynamically his personalized discount, which can be generically applied on the normal price of any recommend item through its predefined discount rules.",
"title": ""
},
{
"docid": "4320278dcbf0446daf3d919c21606208",
"text": "The operation of different brain systems involved in different types of memory is described. One is a system in the primate orbitofrontal cortex and amygdala involved in representing rewards and punishers, and in learning stimulus-reinforcer associations. This system is involved in emotion and motivation. A second system in the temporal cortical visual areas is involved in learning invariant representations of objects. A third system in the hippocampus is implicated in episodic memory and in spatial function. Fourth, brain systems in the frontal and temporal cortices involved in short term memory are described. The approach taken provides insight into the neuronal operations that take place in each of these brain systems, and has the aim of leading to quantitative biologically plausible neuronal network models of how each of these memory systems actually operates.",
"title": ""
},
{
"docid": "dc23db7027a8abd982ce2532601ded72",
"text": "This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances to both reduce the amount of information transmitted by individual nodes (communication cost dominates power usage in sensor networks), and to improve quality of data when not all sensor readings can be propagated up the network within a given time constraint. TiNA was evaluated against a traditional in-network aggregation scheme with respect to power savings as well as the quality of data for aggregate queries. Preliminary results show that TiNA can reduce power consumption by up to 50% without any loss in the quality of data.",
"title": ""
},
{
"docid": "d8c40ed2d2b2970412cc8404576d0c80",
"text": "In this paper an adaptive control technique combined with the so-called IDA-PBC (Interconnexion Damping Assignment, Passivity Based Control) controller is proposed for the stabilization of a class of underactuated mechanical systems, namely, the Inertia Wheel Inverted Pendulum (IWIP). It has two degrees of freedom with one actuator. The IDA-PBC stabilizes for all initial conditions (except a set of zeros measure) the upward position of the IWIP. The efficiency of this controller depends on the tuning of several gains. Motivated by this issue we propose to automatically adapt some of these gains in order to regain performance rapidly. The effectiveness of the proposed adaptive scheme is demonstrated through numerical simulations and experimental results.",
"title": ""
},
{
"docid": "fcd4971aea410465b3e7650454e8f36d",
"text": "To UNDERSTAND VISION in physiological terms represents a formidable problem for the biologist. I t am0 unts to learning how the nervous system handles incoming messages so that form, color, movement, and depth can be perceived and interpreted. One approach, perhaps the most direct, is to stimulate the retina with patterns of light while recording from single cells or fibers at various points along the visual pa thway. For each cell the optimum stimulus can be determined, and one can note the charac teristics common to cells at the next. each level in the visual pathway, and compare a given level with",
"title": ""
},
{
"docid": "fc904f979f7b00941852ac9db66f7129",
"text": "The Orchidaceae are one of the most species-rich plant families and their floral diversity and pollination biology have long intrigued evolutionary biologists. About one-third of the estimated 18,500 species are thought to be pollinated by deceit. To date, the focus has been on how such pollination evolved, how the different types of deception work, and how it is maintained, but little progress has been made in understanding its evolutionary consequences. To address this issue, we discuss here how deception affects orchid mating systems, the evolution of reproductive isolation, speciation processes and neutral genetic divergence among species. We argue that pollination by deceit is one of the keys to orchid floral and species diversity. A better understanding of its evolutionary consequences could help evolutionary biologists to unravel the reasons for the evolutionary success of orchids.",
"title": ""
},
{
"docid": "a1908a924387aa92addb85a22790e0d1",
"text": "This paper describes the second edition of the shared task on Taxonomy Extraction Evaluation organised as part of SemEval 2016. This task aims to extract hypernym-hyponym relations between a given list of domain-specific terms and then to construct a domain taxonomy based on them. TExEval-2 introduced a multilingual setting for this task, covering four different languages including English, Dutch, Italian and French from domains as diverse as environment, food and science. A total of 62 runs submitted by 5 different teams were evaluated using structural measures, by comparison with gold standard taxonomies and by manual quality assessment of novel relations.",
"title": ""
},
{
"docid": "4d51e2a6f1ddfb15753117b0f22e0fad",
"text": "We describe distributed algorithms for two widely-used topic models, namely the Latent Dirichlet Allocation (LDA) model, and the Hierarchical Dirichet Process (HDP) model. In our distributed algorithms the data is partitioned across separate processors and inference is done in a parallel, distributed fashion. We propose two distributed algorithms for LDA. The first algorithm is a straightforward mapping of LDA to a distributed processor setting. In this algorithm processors concurrently perform Gibbs sampling over local data followed by a global update of topic counts. The algorithm is simple to implement and can be viewed as an approximation to Gibbs-sampled LDA. The second version is a model that uses a hierarchical Bayesian extension of LDA to directly account for distributed data. This model has a theoretical guarantee of convergence but is more complex to implement than the first algorithm. Our distributed algorithm for HDP takes the straightforward mapping approach, and merges newly-created topics either by matching or by topic-id. Using five real-world text corpora we show that distributed learning works well in practice. For both LDA and HDP, we show that the converged test-data log probability for distributed learning is indistinguishable from that obtained with single-processor learning. Our extensive experimental results include learning topic models for two multi-million document collections using a 1024-processor parallel computer.",
"title": ""
}
] |
scidocsrr
|
73c67c03c0e1225160430d5d722254dd
|
Artificial Intelligence and Natural Language
|
[
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
}
] |
[
{
"docid": "3120b862a5957b0deeec5345376b74d0",
"text": "This paper deals with automatic cartoon colorization. This is a hard issue, since it is an ill-posed problem that usually requires user intervention to achieve high quality. Motivated by the recent successes in natural image colorization based on deep learning techniques, we investigate the colorization problem at the cartoon domain using Convolutional Neural Network. To our best knowledge, no existing papers or research studies address this problem using deep learning techniques. Here we investigate a deep Convolutional Neural Network based automatic color filling method for cartoons.",
"title": ""
},
{
"docid": "309dee96492cf45ed2887701b27ad3ee",
"text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of the software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of a improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.",
"title": ""
},
{
"docid": "0ee27f9045935db4241e9427bed2af59",
"text": "As a new generation of deep-sea Autonomous Underwater Vehicle (AUV), Qianlong I is a 6000m rated glass deep-sea manganese nodules detection AUV which based on the CR01 and the CR02 deep-sea AUVs and developed by Shenyang Institute of Automation, the Chinese Academy of Sciences from 2010. The Qianlong I was tested in the thousand-isles lake in Zhejiang Province of China during November 2012 to March 2013 and the sea trials were conducted in the South China Sea during April 20-May 2, 2013 after the lake tests and the ocean application completed in October 2013. This paper describes two key problems encountered in the process of developing Qianlong I, including the launch and recovery systems development and variable buoyancy system development. Results from the recent lake and sea trails are presented, and future missions and development plans are discussed.",
"title": ""
},
{
"docid": "4f66bf9da23c0beb562dfaeb3af18d93",
"text": "Cloud computing concept has been envisioned as architecture of the next generation for Information Technology (IT) enterprise. The Cloud computing idea offers with dynamic scalable resources provisioned as examine on the Internet. It allows access to remote computing services and users only have to pay for what they want to use, when they want to use it. But the security of the information which is stored in the cloud is the major issue for a cloud user. Cloud computing has been flourishing in past years because of its ability to provide users with on-demand, flexible, reliable, and low-cost services. With more and more cloud applications being available, data security becomes an important issue to the cloud. In order to make sure security of the information at cloud data storage end, a design and implementation of an algorithm to enhance cloud security is proposed. With a concept, where the proposed algorithm (PA) combines features of two other existing algorithms named Ceaser cipher and Attribute based cryptography (ABC). In this research work, text information are encrypting using "Caesar Cipher" then produced cipher text again encrypted by using proposed algorithm (PA) with the help of private key of 128 bits. And in the last step of encryption process, based on ABC, attribute related to cipher text is stored along with cipher text",
"title": ""
},
{
"docid": "094dbd57522cb7b9b134b14852bea78b",
"text": "When encountering qualitative research for the first time, one is confronted with both the number of methods and the difficulty of collecting, analysing and presenting large amounts of data. In quantitative research, it is possible to make a clear distinction between gathering and analysing data. However, this distinction is not clear-cut in qualitative research. The objective of this paper is to provide insight for the novice researcher and the experienced researcher coming to grounded theory for the first time. For those who already have experience in the use of the method the paper provides further much needed discussion arising out of デエW マWデエラSげゲ ;Sラヮデキラミ キミ デエW I“ aキWノSく In this paper the authors present a practical application and illustrate how grounded theory method was applied to an interpretive case study research. The paper discusses grounded theory method and provides guidance for the use of the method in interpretive studies.",
"title": ""
},
{
"docid": "95f5a6da082e3c835301f7655e9826be",
"text": "Information and communication technologies (ICT) have become commonplace entities in all aspects of life. Across the past twenty years the use of ICT has fundamentally changed the practices and procedures of nearly all forms of endeavour within business and governance. Within education, ICT has begun to have a presence but the value of ICT is not affordable. Unfortunately, there are some limitations confronting institutions in Nigeria from infusing ICT. The basic principle of cloud computing entails the reduction of in-house data centres and the delegation of a portion or all of the Information Technology infrastructure capability to a third party. This holds the promise of driving down cost while fostering innovation and promoting agility. Institutions of higher learning, such as Universities and Colleges, are the core of innovation through their advanced research and development. Subsequently, Higher Institutions may benefit greatly by harnessing the power of cloud computing, including cost cutting as well as all the above types of cloud services. This paper explores the application of cloud computing in higher education in Nigeria, issues with ICT in Nigeria and touches upon some aspired benefits as well as expected limitations of cloud computing.",
"title": ""
},
{
"docid": "27237bf03da7f6aea13c137668def5f0",
"text": "In deep learning community, gradient based methods are typically employed to train the proposed models. These methods generally operate in a mini-batch training manner wherein a small fraction of the training data is invoked to compute an approximative gradient. It is reported that models trained with large batch are prone to generalize worse than those trained with small batch. Several inspiring works are conducted to figure out the underlying reason of this phenomenon, but almost all of them focus on classification tasks. In this paper, we investigate the influence of batch size on regression task. More specifically, we tested the generalizability of deep auto-encoder trained with varying batch size and checked some well-known measures relating to model generalization. Our experimental results lead to three conclusions. First, there exist no obvious generalization gap in regression model such as auto-encoders. Second, with a same train loss as target, small batch generally lead to solutions closer to the starting point than large batch. Third, spectral norm of weight matrices is closely related to generalizability of the model, but different layers contribute variously to the generalization performance.",
"title": ""
},
{
"docid": "8e6ceaadcad931afcf9b9f2f17deb4fb",
"text": "We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively.1",
"title": ""
},
{
"docid": "612416cb82559f94d8d4b888bad17ba1",
"text": "Future plastic materials will be very different from those that are used today. The increasing importance of sustainability promotes the development of bio-based and biodegradable polymers, sometimes misleadingly referred to as 'bioplastics'. Because both terms imply \"green\" sources and \"clean\" removal, this paper aims at critically discussing the sometimes-conflicting terminology as well as renewable sources with a special focus on the degradation of these polymers in natural environments. With regard to the former we review innovations in feedstock development (e.g. microalgae and food wastes). In terms of the latter, we highlight the effects that polymer structure, additives, and environmental variables have on plastic biodegradability. We argue that the 'biodegradable' end-product does not necessarily degrade once emitted to the environment because chemical additives used to make them fit for purpose will increase the longevity. In the future, this trend may continue as the plastics industry also is expected to be a major user of nanocomposites. Overall, there is a need to assess the performance of polymer innovations in terms of their biodegradability especially under realistic waste management and environmental conditions, to avoid the unwanted release of plastic degradation products in receiving environments.",
"title": ""
},
{
"docid": "00ed940459b92d92981e4132a2b5e9c0",
"text": "Variants of Hirschsprung disease are conditions that clinically resemble Hirschsprung disease, despite the presence of ganglion cells in rectal suction biopsies. The characterization and differentiation of various entities are mainly based on histologic, immunohistochemical, and electron microscopy findings of biopsies from patients with functional intestinal obstruction. Intestinal neuronal dysplasia is histologically characterized by hyperganglionosis, giant ganglia, and ectopic ganglion cells. In most intestinal neuronal dysplasia cases, conservative treatments such as laxatives and enema are sufficient. Some patients may require internal sphincter myectomy. Patients with the diagnosis of isolated hypoganglionosis show decreased numbers of nerve cells, decreased plexus area, as well as increased distance between ganglia in rectal biopsies, and resection of the affected segment has been the treatment of choice. The diagnosis of internal anal sphincter achalasia is based on abnormal rectal manometry findings, whereas rectal suction biopsies display presence of ganglion cells as well as normal acetylcholinesterase activity. Internal anal sphincter achalasia is either treated by internal sphincter myectomy or botulinum toxin injection. Megacystis microcolon intestinal hypoperistalsis is a rare condition, and the most severe form of functional intestinal obstruction in the newborn. Megacystis microcolon intestinal hypoperistalsis is characterized by massive abdominal distension caused by a largely dilated nonobstructed bladder, microcolon, and decreased or absent intestinal peristalsis. Although the outcome has improved in recent years, survivors have to be either maintained by total parenteral nutrition or have undergone multivisceral transplant. This review article summarizes the current knowledge of the aforementioned entities of variant HD.",
"title": ""
},
{
"docid": "6cce71160f5aa336ccda512d0d9928fc",
"text": "We present a data management platform in the cloud, CloudDB. The guiding principle of CloudDB’s design is establishing data independence for the applications that need to use diverse underlying data stores that are optimized for varying workload needs and characteristics. The applications should not have to be aware of the physical organization of the data and how the data is accessed. Ideally, an application only needs a logical specification of the data access layer and the data access requests are handled in a declarative way. CloudDB hosts variety of specialized databases that deliver high performance, scalability, and cost efficiency for varying application needs. CloudDB’s API layer is designed in such a way to give data independence to the higher level applications. The goal is to let the clients use just a simple, standard, and uniform language API to access data management functions as a service.",
"title": ""
},
{
"docid": "e1a0660684ac0552d596402bcda8d40a",
"text": "AIM\nTo examine those sources of information which nurses find useful for reducing the uncertainty associated with their clinical decisions.\n\n\nBACKGROUND\nNursing research has concentrated almost exclusively on the concept of research implementation. Few, if any, papers examine the use of research knowledge in the context of clinical decision-making. There is a need to establish how useful nurses perceive information sources are, for reducing the uncertainties they face when making clinical decisions.\n\n\nDESIGN\nCross-case analysis involving qualitative interviews, observation, documentary audit and Q methodological modelling of shared subjectivities amongst nurses. The case sites were three large acute hospitals in the north of England, United Kingdom. One hundred and eight nurses were interviewed, 61 of whom were also observed for a total of 180 hours and 122 nurses were involved in the Q modelling exercise.\n\n\nRESULTS\nText-based and electronic sources of research-based information yielded only small amounts of utility for practising clinicians. Despite isolating four significantly different perspectives on what sources were useful for clinical decision-making, it was human sources of information for practice that were overwhelmingly perceived as the most useful in reducing the clinical uncertainties of nurse decision-makers.\n\n\nCONCLUSIONS\nIt is not research knowledge per se that carries little weight in the clinical decisions of nurses, but rather the medium through which it is delivered. Specifically, text-based and electronic resources are not viewed as useful by nurses engaged in making decisions in real time, in real practice, but those individuals who represent a trusted and clinically credible source are. More research needs to be carried out on the qualities of people regarded as clinically important information agents (specifically, those in clinical nurse specialist and associated roles) whose messages for practice appear so useful for clinicians.",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "6f9f95f29a2fb1069ce924f733947d7d",
"text": "While human action recognition from still images finds wide applications in computer vision, it remains a very challenging problem. Compared with videobased ones, image-based action representation and recognition are impossible to access the motion cues of action, which largely increases the difficulties in dealing with pose variances and cluttered backgrounds. Motivated by the recent success of convolutional neural networks (CNN) in learning discriminative features from objects in the presence of variations and backgrounds, in this paper, we investigate the potentials of CNN in image-based action recognition. A new action recognition method is proposed by implicitly integrating pose hints into the CNN framework, i.e., we use a CNN originally learned for object recognition as a base network and then transfer it to action recognition by training the base network jointly with inference of poses. Such a joint training scheme can guide the network towards pose inference and meanwhile prevent the unrelated knowledge inherited from the base network. For further performance improvement, the training data is augmented by enriching the pose-related samples. The experimental results on three benchmark datasets have demonstrated the effectiveness of our method.",
"title": ""
},
{
"docid": "6549a00df9fadd56b611ee9210102fe8",
"text": "Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the Semantic Web effort grows, a larger community of users for this kind of tools is expected. New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for Semantic Web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment and a subsequent user-testing phase. The target population comprises people with no specific ontology-creation skills that have a general knowledge about domain modelling. The problems found point out that, for this kind of users, current editors are adequate for the creation and maintenance of simple ontologies, but also that there is room for improvement, especially in browsing mechanisms, help systems and visualization metaphors.",
"title": ""
},
{
"docid": "8be921cfab4586b6a19262da9a1637de",
"text": "Automatic segmentation of microscopy images is an important task in medical image processing and analysis. Nucleus detection is an important example of this task. Mask-RCNN is a recently proposed state-of-the-art algorithm for object detection, object localization, and object instance segmentation of natural images. In this paper we demonstrate that Mask-RCNN can be used to perform highly effective and efficient automatic segmentations of a wide range of microscopy images of cell nuclei, for a variety of cells acquired under a variety of conditions.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "cf6c6676844ae0068527b03fe419ed82",
"text": "We establish rates of convergences in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence √ d/n for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method for quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence d/n. We discuss the optimality of these different rates pointing out references to lower bounds when they are available. In particular, these results bring a generalization the results of [29] on sparse regression estimation to some autoregression.",
"title": ""
},
{
"docid": "5c17c39bcf3a940950c321eaeabcd1d4",
"text": "Data indicate that large percentages of the general public regard psychology’s scientific status with considerable skepticism. I examine 6 criticisms commonly directed at the scientific basis of psychology (e.g., psychology is merely common sense, psychology does not use scientific methods, psychology is not useful to society) and offer 6 rebuttals. I then address 8 potential sources of public skepticism toward psychology and argue that although some of these sources reflect cognitive errors (e.g., hindsight bias) or misunderstandings of psychological science (e.g., failure to distinguish basic from applied research), others (e.g., psychology’s failure to police itself, psychology’s problematic public face) reflect the failure of professional psychology to get its own house in order. I offer several individual and institutional recommendations for enhancing psychology’s image and contend that public skepticism toward psychology may, paradoxically, be one of our field’s strongest allies.",
"title": ""
}
] |
scidocsrr
|
50c5f23e29217b36ed42180853b3314e
|
The factors influencing members' continuance intentions in professional virtual communities - a longitudinal study
|
[
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb",
"text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.",
"title": ""
}
] |
[
{
"docid": "3b47a88f37a06ec44d510a4dbfc0993d",
"text": "Governance, Risk and Compliance (GRC) as an integrated concept has gained great interest recently among researchers in the Information Systems (IS) field. The need for more effective and efficient business processes in the area of financial controls drives enterprises to successfully implement GRC systems as an overall goal when they are striving for enterprise value of their integrated systems. The GRC implementation process is a significant parameter influencing the success of operational performance and financial governance and supports the practices for competitive advantage within the organisations. However, GRC literature is limited regarding the analysis of their implementation and adoption success. Therefore, there is a need for further research and contribution in the area of GRC systems and more specifically their implementation process. The research at hand recognizes GRC as a fundamental business requirement and focuses on the need to analyse the implementation process of such enterprise solutions. The research includes theoretical and empirical investigation of the GRC implementation within an enterprise and develops a framework for the analysis of the GRC adoption. The approach suggests that the three success factors (integration, optimisation, information) influence the adoption of the GRC and more specifically their implementation process. The proposed framework followed a case study approach to confirm its functionality and is evaluated through interviews with stakeholders involved in GRC implementations. Furthermore, it can be used by the organisations when considering the adoption of GRC solutions and can also suggest a tool for researchers to analyse and explain further the GRC implementation process.",
"title": ""
},
{
"docid": "712b5e8415e8460dbfc1ccdd92f647b0",
"text": "Named entity linking (NEL) grounds entity mentions to their corresponding Wikipedia article. State-of-the-art supervised NEL systems use features over the rich Wikipedia document and link-graph structure. Graph-based measures have been effective over WordNet for word sense disambiguation (WSD). We draw parallels between NEL and WSD, motivating our unsupervised NEL approach that exploits the Wikipedia article and category link graphs. Our system achieves 85.5% accuracy on the TAC 2010 shared task — competitive with the best supervised and unsupervised systems.",
"title": ""
},
{
"docid": "3071b8a720277f0ab203a40aade90347",
"text": "The Internet became an indispensable part of people's lives because of the significant role it plays in the ways individuals interact, communicate and collaborate with each other. Over recent years, social media sites succeed in attracting a large portion of online users where they become not only content readers but also content generators and publishers. Social media users generate daily a huge volume of comments and reviews related to different aspects of life including: political, scientific and social subjects. In general, sentiment analysis refers to the task of identifying positive and negative opinions, emotions and evaluations related to an article, news, products, services, etc. Arabic sentiment analysis is conducted in this study using a small dataset consisting of 1,000 Arabic reviews and comments collected from Facebook and Twitter social network websites. The collected dataset is used in order to conduct a comparison between two free online sentiment analysis tools: SocialMention and SentiStrength that support Arabic language. The results which based on based on the two of classifiers (Decision tree (J48) and SVM) showed that the SentiStrength is better than SocialMention tool.",
"title": ""
},
{
"docid": "2d0b0511f8f2ce41b7d2d60d57bc7236",
"text": "There is broad consensus that good outcome measures are needed to distinguish interventions that are effective from those that are not. This task requires standardized, patient-centered measures that can be administered at a low cost. We developed a questionnaire to assess short- and long-term patient-relevant outcomes following knee injury, based on the WOMAC Osteoarthritis Index, a literature review, an expert panel, and a pilot study. The Knee injury and Osteoarthritis Outcome Score (KOOS) is self-administered and assesses five outcomes: pain, symptoms, activities of daily living, sport and recreation function, and knee-related quality of life. In this clinical study, the KOOS proved reliable, responsive to surgery and physical therapy, and valid for patients undergoing anterior cruciate ligament reconstruction. The KOOS meets basic criteria of outcome measures and can be used to evaluate the course of knee injury and treatment outcome.",
"title": ""
},
{
"docid": "438a9e517a98c6f98f7c86209e601f1b",
"text": "One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose.",
"title": ""
},
{
"docid": "c04cf54a40cd84961657bf50153ff68b",
"text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text(local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACER, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.",
"title": ""
},
{
"docid": "2ebeaa9afb643f0f146ec4adc1604d60",
"text": "The deep learning technology has shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. In particular, recent advances of deep learning techniques bring encouraging performance to fine-grained image classification which aims to distinguish subordinate-level categories, such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning based fine-grained image classification approaches, including the general convolutional neural networks (CNNs), part detection based, ensemble of networks based and visual attention based fine-grained image classification approaches. Besides, the deep learning based semantic segmentation approaches are also covered in this paper. The region proposal based and fully convolutional networks based approaches for semantic segmentation are introduced respectively.",
"title": ""
},
{
"docid": "577b0b3215fbd6a6b6fd0d8882967a1e",
"text": "Generating texts of different sentiment labels is getting more and more attention in the area of natural language generation. Recently, Generative Adversarial Net (GAN) has shown promising results in text generation. However, the texts generated by GAN usually suffer from the problems of poor quality, lack of diversity and mode collapse. In this paper, we propose a novel framework SentiGAN, which has multiple generators and one multi-class discriminator, to address the above problems. In our framework, multiple generators are trained simultaneously, aiming at generating texts of different sentiment labels without supervision. We propose a penalty based objective in the generators to force each of them to generate diversified examples of a specific sentiment label. Moreover, the use of multiple generators and one multi-class discriminator can make each generator focus on generating its own examples of a specific sentiment label accurately. Experimental results on four datasets demonstrate that our model consistently outperforms several state-of-the-art text generation methods in the sentiment accuracy and quality of generated texts.",
"title": ""
},
{
"docid": "6a2a7b5831f6b3608eb88f5ccda6d520",
"text": "In this paper we examine currently used programming contest systems. We discuss possible reasons why we do not expect any of the currently existing contest systems to be adopted by a major group of different programming contests. We suggest to approach the design of a contest system as a design of a secure IT system, using known methods from the area of computer",
"title": ""
},
{
"docid": "fca63f719115e863f5245f15f6b1be50",
"text": "Model-based testing (MBT) in hardware-in-the-loop (HIL) platform is a simulation and testing environment for embedded systems, in which test design automation provided by MBT is combined with HIL methodology. A HIL platform is a testing environment in which the embedded system under testing (SUT) assumes to be operating with real-world inputs and outputs. In this paper, we focus on presenting the novel methodologies and tools that were used to conduct the validation of the MBT in HIL platform. Another novelty of the validation approach is that it aims to provide a comprehensive and many-sided process view to validating MBT and HIL related systems including different component, integration and system level testing activities. The research is based on the constructive method of the related scientific literature and testing technologies, and the results are derived through testing and validating the implemented MBT in HIL platform. The used testing process indicated that the functionality of the constructed MBT in HIL prototype platform was validated.",
"title": ""
},
{
"docid": "e5f5aa53a90f482fb46a7f02bae27b20",
"text": "Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific modular and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.",
"title": ""
},
{
"docid": "5377e95300eef7496648b67749652988",
"text": "This paper introduces SDF-TAR: a real-time SLAM system based on volumetric registration in RGB-D data. While the camera is tracked online on the GPU, the most recently estimated poses are jointly refined on the CPU. We perform registration by aligning the data in limited-extent volumes anchored at salient 3D locations. This strategy permits efficient tracking on the GPU. Furthermore, the small memory load of the partial volumes allows for pose refinement to be done concurrently on the CPU. This refinement is performed over batches of a fixed number of frames, which are jointly optimized until the next batch becomes available. Thus drift is reduced during online operation, eliminating the need for any posterior processing. Evaluating on two public benchmarks, we demonstrate improved rotational motion estimation and higher reconstruction precision than related methods.",
"title": ""
},
{
"docid": "a7841eef3576876512d5f410e07380e8",
"text": "4 Abstract Accurate segmentation of 2-D, 3-D, and 4-D medical images to isolate 5 anatomical objects of interest for analysis is essential in almost any computer-aided 6 diagnosis system or other medical imaging applications. Various aspects of segmen7 tation features and algorithms have been extensively explored for many years in a 8 host of publications. However, the problem remains challenging, with no general and 9 unique solution, due to a large and constantly growing number of different objects of 10 interest, large variations of their properties in images, different medical imaging 11 modalities, and associated changes of signal homogeneity, variability, and noise for 12 each object. This chapter overviews most popular medical image segmentation 13 techniques and discusses their capabilities, and basic advantages and limitations. 14 The state-of-the-art techniques of the last decade are also outlined.",
"title": ""
},
{
"docid": "9db0e9b90db4d7fd9c0f268b5ee9b843",
"text": "Traditionally, the evaluation of surgical procedures in virtual reality (VR) simulators has been restricted to their individual technical aspects disregarding the procedures carried out by teams. However, some decision models have been proposed to support the collaborative training evaluation process of surgical teams in collaborative virtual environments. The main objective of this article is to present a collaborative simulator based on VR, named SimCEC, as a potential solution for education, training, and evaluation in basic surgical routines for teams of undergraduate students. The simulator considers both tasks performed individually and those carried in a collaborative manner. The main contribution of this work is to improve the discussion about VR simulators requirements (design and implementation) to provide team training in relevant topics, such as users’ feedback in real time, collaborative training in networks, interdisciplinary integration of curricula, and continuous evaluation.",
"title": ""
},
{
"docid": "88fb71e503e0d0af7515dd8489061e25",
"text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970smade Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cfe1b91f879ab59b3afcfe2bf64c911e",
"text": "We consider a variant of the classical three-peg Tower of Hanoi problem, where limitations on the possible moves among the pegs are imposed. Each variant corresponds to a di-graph whose vertices are the pegs, and an edge from one vertex to another designates the ability of moving a disk from the first peg to the other, provided that the rules concerning the disk sizes are obeyed. There are five non-isomorphic graphs on three vertices, which are strongly connected—a sufficient condition for the existence of a solution to the problem. We provide optimal algorithms for the problem for all these graphs, and find the number of moves each requires.",
"title": ""
},
{
"docid": "1d354f59b9659785bd1548c756611647",
"text": "Phishing email is one of the major problems of today's Internet, resulting in financial losses for organizations and annoying individual users. Numerous approaches have been developed to filter phishing emails, yet the problem still lacks a complete solution. In this paper, we present a survey of the state of the art research on such attacks. This is the first comprehensive survey to discuss methods of protection against phishing email attacks in detail. We present an overview of the various techniques presently used to detect phishing email, at the different stages of attack, mostly focusing on machine-learning techniques. A comparative study and evaluation of these filtering methods is carried out. This provides an understanding of the problem, its current solution space, and the future research directions anticipated.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
},
{
"docid": "2b9733f936f39d0bb06b8f89a95f31e4",
"text": "In order to improve the three-dimensional (3D) exploration of virtual spaces above a tabletop, we developed a set of navigation techniques using a handheld magic lens. These techniques allow for an intuitive interaction with two-dimensional and 3D information spaces, for which we contribute a classification into volumetric, layered, zoomable, and temporal spaces. The proposed PaperLens system uses a tracked sheet of paper to navigate these spaces with regard to the Z-dimension (height above the tabletop). A formative user study provided valuable feedback for the improvement of the PaperLens system with respect to layer interaction and navigation. In particular, the problem of keeping the focus on selected layers was addressed. We also propose additional vertical displays in order to provide further contextual clues.",
"title": ""
}
] |
scidocsrr
|
0e68a5fe09cf5d71fe6b3b6bca92f8b0
|
Task switching in video game players: Benefits of selective attention but not resistance to proactive interference.
|
[
{
"docid": "b1151d3588dc4abff883bef8c60005d1",
"text": "Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A following control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather due to an enhancement in the slower more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, NVGPs trained on an action video game established the causal effect of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills.",
"title": ""
},
{
"docid": "040e5e800895e4c6f10434af973bec0f",
"text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.",
"title": ""
}
] |
[
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
},
{
"docid": "3092ba8df6080445f15382235ed63985",
"text": "The introduction of new technologies into vehicles has been imposing new forms of interaction, being a challenge to drivers but also to HMI research. The multiplicity of on-board systems in the market has been changing the driving task, being the consequences of such interaction a concern especially to older drivers. Several studies have been conducted to report the natural functional declines of older drivers and the way they cope with additional sources of information and additional tasks in specific moments. However, the evolution of these equipments, their frequent presence in the automotive market and also the increased acceptability and familiarization of older drivers with such technologies, compel researchers to consider other aspects of these interactions: from adaptation to the long term effects of using any in-vehicle technologies.",
"title": ""
},
{
"docid": "2e389715d9beb1bc7c9ab06131abc67a",
"text": "Digital forensic science is very much still in its infancy, but is becoming increasingly invaluable to investigators. A popular area for research is seeking a standard methodology to make the digital forensic process accurate, robust, and efficient. The first digital forensic process model proposed contains four steps: Acquisition, Identification, Evaluation and Admission. Since then, numerous process models have been proposed to explain the steps of identifying, acquiring, analysing, storage, and reporting on the evidence obtained from various digital devices. In recent years, an increasing number of more sophisticated process models have been proposed. These models attempt to speed up the entire investigative process or solve various of problems commonly encountered in the forensic investigation. In the last decade, cloud computing has emerged as a disruptive technological concept, and most leading enterprises such as IBM, Amazon, Google, and Microsoft have set up their own cloud-based services. In the field of digital forensic investigation, moving to a cloud-based evidence processing model would be extremely beneficial and preliminary attempts have been made in its implementation. Moving towards a Digital Forensics as a Service model would not only expedite the investigative process, but can also result in significant cost savings – freeing up digital forensic experts and law enforcement personnel to progress their caseload. This paper aims to evaluate the applicability of existing digital forensic process models and analyse how each of these might apply to a cloudbased evidence processing paradigm.",
"title": ""
},
{
"docid": "06ba6c64fd0f45f61e4c2ca20c41f9d7",
"text": "About ten years ago, the eld of range searching, especially simplex range searching, was wide open. At that time, neither e cient algorithms nor nontrivial lower bounds were known for most range-searching problems. A series of papers by Haussler and Welzl [161], Clarkson [88, 89], and Clarkson and Shor [92] not only marked the beginning of a new chapter in geometric searching, but also revitalized computational geometry as a whole. Led by these and a number of subsequent papers, tremendous progress has been made in geometric range searching, both in terms of developing e cient data structures and proving nontrivial lower bounds. From a theoretical point of view, range searching is now almost completely solved. The impact of general techniques developed for geometric range searching | \"-nets, 1=rcuttings, partition trees, multi-level data structures, to name a few | is evident throughout computational geometry. This volume provides an excellent opportunity to recapitulate the current status of geometric range searching and to summarize the recent progress in this area. Range searching arises in a wide range of applications, including geographic information systems, computer graphics, spatial databases, and time-series databases. Furthermore, a variety of geometric problems can be formulated as a range-searching problem. A typical range-searching problem has the following form. Let S be a set of n points in R , and let",
"title": ""
},
{
"docid": "4b432e49485b57ddb1921478f2917d4b",
"text": "Dynamic perturbations of reaching movements are an important technique for studying motor learning and adaptation. Adaptation to non-contacting, velocity-dependent inertial Coriolis forces generated by arm movements during passive body rotation is very rapid, and when complete the Coriolis forces are no longer sensed. Adaptation to velocity-dependent forces delivered by a robotic manipulandum takes longer and the perturbations continue to be perceived even when adaptation is complete. These differences reflect adaptive self-calibration of motor control versus learning the behavior of an external object or 'tool'. Velocity-dependent inertial Coriolis forces also arise in everyday behavior during voluntary turn and reach movements but because of anticipatory feedforward motor compensations do not affect movement accuracy despite being larger than the velocity-dependent forces typically used in experimental studies. Progress has been made in understanding: the common features that determine adaptive responses to velocity-dependent perturbations of jaw and limb movements; the transfer of adaptation to mechanical perturbations across different contact sites on a limb; and the parcellation and separate representation of the static and dynamic components of multiforce perturbations.",
"title": ""
},
{
"docid": "c90eae76dbde16de8d52170c2715bd7a",
"text": "Several literatures converge on the idea that approach and avoidance/withdrawal behaviors are managed by two partially distinct self-regulatory system. The functions of these systems also appear to be embodied in discrepancyreducing and -enlarging feedback loops, respectively. This article describes how the feedback construct has been used to address these two classes of action and the affective experiences that relate to them. Further discussion centers on the development of measures of individual differences in approach and avoidance tendencies, and how these measures can be (and have been) used as research tools, to investigate whether other phenomena have their roots in approach or avoidance.",
"title": ""
},
{
"docid": "45840f792b397da02fadc644d35faaf7",
"text": "Do there exist general principles, which any system must obey in order to achieve advanced general intelligence using feasible computational resources? Here we propose one candidate: “cognitive synergy,” a principle which suggests that general intelligences must contain different knowledge creation mechanisms corresponding to different sorts of memory (declarative, procedural, sensory/episodic, attentional, intentional); and that these different mechanisms must be interconnected in such a way as to aid each other in overcoming memory-type-specific combinatorial explosions.",
"title": ""
},
{
"docid": "16eebb268d8b3322c01fc0a24060b901",
"text": "Low cost depth sensors have been a huge success in the field of computer vision and robotics, providing depth images even in untextured environments. The same characteristic applies to the Kinect V2, a time-of-flight camera with high lateral resolution. In order to assess advantages of the new sensor over its predecessor for standard applications, we provide an analysis of measurement noise, accuracy and other error sources with the Kinect V2. We examined the raw sensor data by using an open source driver. Further insights on the sensor design and examples of processing techniques are given to completely exploit the unrestricted access to the device.",
"title": ""
},
{
"docid": "6570f9b4f8db85f40a99fb1911aa4967",
"text": "Honey bees have played a major role in the history and development of humankind, in particular for nutrition and agriculture. The most important role of the western honey bee (Apis mellifera) is that of pollination. A large amount of crops consumed throughout the world today are pollinated by the activity of the honey bee. It is estimated that the total value of these crops stands at 155 billion euro annually. The goal of the work outlined in this paper was to use wireless sensor network technology to monitor a colony within the beehive with the aim of collecting image and audio data. These data allows the beekeeper to obtain a much more comprehensive view of the in-hive conditions, an indication of flight direction, as well as monitoring the hive outside of the traditional beekeeping times, i.e. during the night, poor weather, and winter months. This paper outlines the design of a fully autonomous beehive monitoring system which provided image and sound monitoring of the internal chambers of the hive, as well as a warning system for emergency events such as possible piping, dramatically increased hive activity, or physical damage to the hive. The final design included three wireless nodes: a digital infrared camera with processing capabilities for collecting imagery of the hive interior; an external thermal imaging camera node for monitoring the colony status and activity, and an accelerometer and a microphone connected to an off the shelf microcontroller node for processing. The system allows complex analysis and sensor fusion. Some scenarios based on sound processing, image collection, and accelerometers are presented. Power management was implemented which allowed the system to achieve energy neutrality in an outdoor deployment with a 525 × 345 mm solar panel.",
"title": ""
},
{
"docid": "35e8a61fe4b87a1421d48dc583e69c57",
"text": "As one of the most popular micro-blogging services, Twitter attracts millions of users, producing millions of tweets daily. Shared information through this service spreads faster than would have been possible with traditional sources, however the proliferation of user-generation content poses challenges to browsing and finding valuable information. In this paper we propose a graph-theoretic model for tweet recommendation that presents users with items they may have an interest in. Our model ranks tweets and their authors simultaneously using several networks: the social network connecting the users, the network connecting the tweets, and a third network that ties the two together. Tweet and author entities are ranked following a co-ranking algorithm based on the intuition that that there is a mutually reinforcing relationship between tweets and their authors that could be reflected in the rankings. We show that this framework can be parametrized to take into account user preferences, the popularity of tweets and their authors, and diversity. Experimental evaluation on a large dataset shows that our model outperforms competitive approaches by a large margin.",
"title": ""
},
{
"docid": "301e061163b115126b8f0b9851ed265c",
"text": "Pressure ulcers are a common problem among older adults in all health care settings. Prevalence and incidence estimates vary by setting, ulcer stage, and length of follow-up. Risk factors associated with increased pressure ulcer incidence have been identified. Activity or mobility limitation, incontinence, abnormalities in nutritional status, and altered consciousness are the most consistently reported risk factors for pressure ulcers. Pain, infectious complications, prolonged and expensive hospitalizations, persistent open ulcers, and increased risk of death are all associated with the development of pressure ulcers. The tremendous variability in pressure ulcer prevalence and incidence in health care settings suggests that opportunities exist to improve outcomes for persons at risk for and with pressure ulcers.",
"title": ""
},
{
"docid": "99f7aa4a6e3111d18ccbb527d2a9f312",
"text": "This study investigates the development of trust in a Web-based vendor during two stages of a consumer’s Web experience: exploration and commitment. Through an experimental design, the study tests the effects of third party endorsements, reputation, and individual differences on trust in the vendor during these two stages.",
"title": ""
},
{
"docid": "681d0a6dcad967340cfb3ebe9cf7b779",
"text": "We demonstrate an integrated buck dc-dc converter for multi-V/sub CC/ microprocessors. At nominal conditions, the converter produces a 0.9-V output from a 1.2-V input. The circuit was implemented in a 90-nm CMOS technology. By operating at high switching frequency of 100 to 317 MHz with four-phase topology and fast hysteretic control, we reduced inductor and capacitor sizes by three orders of magnitude compared to previously published dc-dc converters. This eliminated the need for the inductor magnetic core and enabled integration of the output decoupling capacitor on-chip. The converter achieves 80%-87% efficiency and 10% peak-to-peak output noise for a 0.3-A output current and 2.5-nF decoupling capacitance. A forward body bias of 500 mV applied to PMOS transistors in the bridge improves efficiency by 0.5%-1%.",
"title": ""
},
{
"docid": "d89afd13098e50c52f8dfe7ddd1b8674",
"text": "The trends of the increasing middleboxes make the middle network more and more complex. Today, many middleboxes work on application layer and offer significant network services by the plain-text traffic, such as firewalling, intrusion detecting and application layer gateways. At the same time, more and more network applications are encrypting their data transmission to protect security and privacy. It is becoming a critical task and hot topic to continue providing application-layer middlebox services in the encrypted Internet, however, the state of the art is far from being able to be deployed in the real network. In this paper, we propose a practical architecture, named PlainBox, to enable session key sharing between the communication client and the middleboxes in the network path. It employs Attribute-Based Encryption (ABE) in the key sharing protocol to support multiple chaining middleboxes efficiently and securely. We develop a prototype system and apply it to popular security protocols such as TLS and SSH. We have tested our prototype system in a lab testbed as well as real-world websites. Our result shows PlainBox introduces very little overhead and the performance is practically deployable.",
"title": ""
},
{
"docid": "22c3eb9aa0127e687f6ebb6994fc8d1d",
"text": "In this paper, the novel inverse synthetic aperture secondary radar wireless positioning technique is introduced. The proposed concept allows for a precise spatial localization of a backscatter transponder even in dense multipath environments. A novel secondary radar signal evaluation concept compensates for the unknown modulation phase of the returned signal and thus leads to radar signals comparable to common primary radar. With use of this concept, inverse synthetic aperture radar algorithms can be applied to the signals of backscatter transponder systems. In simulations and first experiments, we used a broadband holographic reconstruction principle to realize the inverse synthetic aperture approach. The movement of the transponder along a short arbitrary aperture path is determined with assisting relative sensors (dead reckoning or inertia sensors). A set of signals measured along the aperture is adaptively focused to the transponder position. By this focusing technique, multipath reflections can be suppressed impressively and a precise indoor positioning becomes feasible. With our technique, completely new and powerful options for integrated navigation and sensor fusion in RF identification systems and wireless local positioning systems are now possible.",
"title": ""
},
{
"docid": "cfbadb48bd915aa0b3e906abce670cdc",
"text": "Traceability defined as the ability to trace dependent items within a model and the ability to trace correspondent items in other models is advocated as a desirable property of a software development process. Potential benefits of good traceability are clearer documentation, more focussed development, increased ease of system understanding, and more precise impact analysis of proposed changes. An industry-scale project applying the analysis and design method Objectory has been examined and documented with a number of traceability examples generated from the perspective of a maintainer attempting to understand the system. Four representative examples and a categorization of traceability are presented in this paper in order to provide a concrete empirical basis for the application of traceability to systems development.",
"title": ""
},
{
"docid": "f55f9174b70196e912c0cbe477ada467",
"text": "This paper studies the use of structural representations for learning relations between pairs of short texts (e.g., sentences or paragraphs) of the kind: the second text answers to, or conveys exactly the same information of, or is implied by, the first text. Engineering effective features that can capture syntactic and semantic relations between the constituents composing the target text pairs is rather complex. Thus, we define syntactic and semantic structures representing the text pairs and then apply graph and tree kernels to them for automatically engineering features in Support Vector Machines. We carry out an extensive comparative analysis of stateof-the-art models for this type of relational learning. Our findings allow for achieving the highest accuracy in two different and important related tasks, i.e., Paraphrasing Identification and Textual Entailment Recognition.",
"title": ""
},
{
"docid": "2d6d5c8b1ac843687db99ccf50a0baff",
"text": "This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.",
"title": ""
},
{
"docid": "b0b4fe1bfe64f306895f2cfc28d50415",
"text": "Background\nFollowing news of deaths in two districts of Jharkhand (West Singhbum and Garhwa) in November 2016, epidemiological investigations were contemplated to investigate any current outbreak of falciparum malaria and deaths attributed to it.\n\n\nMethodology\nThe epidemiological investigations, verbal autopsy of suspected deaths attributed to malaria and keys interviews were conducted in the 2nd and 4th week of November 2016 in Khuntpani and Dhurki block of West Singhbum and Garhwa districts, respectively, following a strict protocol.\n\n\nResults\nThe affected villages were Argundi and Korba-Pahariya and their adjacent tolas in Khuntpani and Dhurki block. Undoubtedly, there was the continuous transmission of falciparum malaria in both the regions in October and November 2016. The total cases (according to case definitions) were 1002, of them, 338 and 12 patients were positive for Plasmodium falciparum positive (Pf +ve) and Plasmodium vivax positive (Pv +ve), respectively, in the affected areas of Khuntpani block. In Dhurki block, out of the total of 631 patients fulfilling the case definition, 65 patients were PF +ve and 23 Pv +ve. Comparing to the last year, there is remarkably high number of falciparum cases. Verbal autopsy of deceased individuals showed that malaria might be one of the strongly probable diagnoses, but not conclusively.\n\n\nConclusion\nAccording to epidemiological investigation, verbal autopsy and key interviews conducted, it may be concluded that there is a definite outbreak of falciparum malaria in the area and environment is congenial for malaria and other tropical diseases.",
"title": ""
},
{
"docid": "739aaf487d6c5a7b7fe9d0157d530382",
"text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both party agrees to the transaction rules (smart contract) issued by the owner of the data. Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.",
"title": ""
}
] |
scidocsrr
|
295bb8dfa090c9770be33ae18c576a26
|
SmartEscape: A Mobile Smart Individual Fire Evacuation System Based on 3D Spatial Model
|
[
{
"docid": "56e1778df9d5b6fa36cbf4caae710e67",
"text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.",
"title": ""
},
{
"docid": "5546f93f4c10681edb0fdfe3bf52809c",
"text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.",
"title": ""
}
] |
[
{
"docid": "d4488867e774e28abc2b960a9434d052",
"text": "Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"title": ""
},
{
"docid": "aac3becdb57fd0488fb3046af4ac95da",
"text": "We introduced some of the basic principles, techniques, and key design issues common to ISPs used in many diverse scientific, military, and commercial applications, and we touched on some of the less intuitive effects that must be dealt with. Many of these effects can influence the initial configuration of the system as well as design details later in the design process. The successful design of an ISP usually requires a multidisciplinary design team. The design of an ISP must often be closely coordinated with that of other major subsystems such as the primary sensor and the optics. The role of the systems engineer in the design process is perhaps the most critical because other members of the team may not be aware of the consequences of many of the effects discussed above. Inertially stabilized platforms (ISPs) are used to stabilize and point a broad array of sensors, cameras, telescopes, and weapon systems.",
"title": ""
},
{
"docid": "bc890d9ecf02a89f5979053444daebdf",
"text": "The continued growth of mobile and interactive computing requires devices manufactured with low-cost processes, compatible with large-area and flexible form factors, and with additional functionality. We review recent advances in the design of electronic and optoelectronic devices that use colloidal semiconductor quantum dots (QDs). The properties of materials assembled of QDs may be tailored not only by the atomic composition but also by the size, shape, and surface functionalization of the individual QDs and by the communication among these QDs. The chemical and physical properties of QD surfaces and the interfaces in QD devices are of particular importance, and these enable the solution-based fabrication of low-cost, large-area, flexible, and functional devices. We discuss challenges that must be addressed in the move to solution-processed functional optoelectronic nanomaterials.",
"title": ""
},
{
"docid": "1a3357aff8569e691f619a5ace483585",
"text": "Mesenchymal stromal cells (MSCs) are explored as a novel treatment for a variety of medical conditions. Their fate after infusion is unclear, and long-term safety regarding malignant transformation and ectopic tissue formation has not been addressed in patients. We examined autopsy material from 18 patients who had received human leukocyte antigen (HLA)-mismatched MSCs, and 108 tissue samples from 15 patients were examined by PCR. No signs of ectopic tissue formation or malignant tumors of MSC-donor origin were found on macroscopic or histological examination. MSC donor DNA was detected in one or several tissues including lungs, lymph nodes, and intestine in eight patients at levels from 1/100 to <1/1,000. Detection of MSC donor DNA was negatively correlated with time from infusion to sample collection, as DNA was detected from nine of 13 MSC infusions given within 50 days before sampling but from only two of eight infusions given earlier. There was no correlation between MSC engraftment and treatment response. We conclude that MSCs appear to mediate their function through a \"hit and run\" mechanism. The lack of sustained engraftment limits the long-term risks of MSC therapy.",
"title": ""
},
{
"docid": "333b21433d17a9d271868e203c8a9481",
"text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).",
"title": ""
},
{
"docid": "997a0392359ae999dfca6a0d339ea27f",
"text": "Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined. One type of anomaly, for example, concerns the fact that, with certain reference strings and paging algorithms, an increase in mean memory allocation may result in an increase in fault rate. Two paging algorithms, the page fault frequency and working set algorithms, are examined in terms of their anomaly potential, and reference string examples of various anomalies are presented. Two paging algorithm properties, the inclusion property and the generalized inclusion property, are discussed and the anomaly implications of these properties presented.",
"title": ""
},
{
"docid": "699836a5b2caf6acde02c4bad16c2795",
"text": "Drilling end-effector is a key unit in autonomous drilling robot. The perpendicularity of the hole has an important influence on the quality of airplane assembly. Aiming at the robot drilling perpendicularity, a micro-adjusting attitude mechanism and a surface normal measurement algorithm are proposed in this paper. In the mechanism, two rounded eccentric discs are used and the small one is embedded in the big one, which makes the drill’s point static when adjusting the drill’s attitude. Thus, removal of drill’s point position after adjusting the drill attitude can be avoided. Before the micro-adjusting progress, four non-coplanar points in space are used to determine a unique sphere. The normal at the drilling point is measured by four laser ranging sensors. The adjusting angles at which the motors should be rotated to adjust attitude can be calculated by using the deviation between the normal and the drill axis. Finally, the motors will drive the two eccentric discs to achieve micro-adjusting progress. Experiments on drilling robot system and the results demonstrate that the adjusting mechanism and the algorithm for surface normal measurement are effective with high accuracy and efficiency. (1)设计一种微型姿态调整机构, 实现对钻头姿态进行调整, 使其沿制孔点法线进行制孔, 提高孔的垂直度. 使得钻头调整前后, 钻头顶点保持不变, 提高制孔效率. (2)利用4个激光测距传感器, 根据空间不共面四点确定唯一球, 测得制孔点处的法线向量, 为钻头的姿态调整做准备.",
"title": ""
},
{
"docid": "9bdddbd6b3619aa4c23566eea33b4ff7",
"text": "This was a prospective controlled study to compare the beneficial effects of office microlaparoscopic ovarian drilling (OMLOD) under augmented local anesthesia, as a new modality treatment option, compared to those following ovarian drilling with the conventional traditional 10-mm laparoscope (laparoscopic ovarian drilling, LOD) under general anesthesia. The study included 60 anovulatory women with polycystic ovary syndrome (PCOS) who underwent OMLOD (study group) and 60 anovulatory PCOS women, in whom conventional LOD using 10-mm laparoscope under general anesthesia was performed (comparison group). Transvaginal ultrasound scan and blood sampling to measure the serum concentrations of LH, FSH, testosterone and androstenedione were performed before and after the procedure. Intraoperative and postoperative pain scores in candidate women were evaluated during the office microlaparoscopic procedure, in addition to the number of candidates who needed extra analgesia. Women undergoing OMLOD showed good intraoperative and postoperative pain scores. The number of patients discharged within 2 h after the office procedure was significantly higher, without the need for postoperative analgesia in most patients. The LH:FSH ratio, mean serum concentrations of LH and testosterone and free androgen index decreased significantly after both OMLOD and LOD. The mean ovarian volume decreased significantly (P < 0.05) a year after both OMLOD and LOD. There were no significant differences in those results after both procedures. Intra- and postoperatively augmented local anesthesia allows outpatient bilateral ovarian drilling by microlaparoscopy without general anesthesia. The high pregnancy rate, the simplicity of the method and the faster discharge time offer a new option for patients with PCOS who are resistant to clomiphene citrate. Moreover, ovarian drilling could be performed simultaneously during the routine diagnostic microlaparoscopy and integrated into the fertility workup of these patients.",
"title": ""
},
{
"docid": "b753eb752d4f87dbff82d77e8417f389",
"text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:",
"title": ""
},
{
"docid": "06465bde1eb562e90e609a31ed2dfe70",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.",
"title": ""
},
{
"docid": "124aa82a1006890c736968fd6ae62464",
"text": "A modern power grid needs to become smarter in order to provide an affordable, reliable, and sustainable supply of electricity. For these reasons, considerable activity has been carried out in the United States and Europe to formulate and promote a vision for the development of future smart power grids. However, the majority of these activities emphasized only the distribution grid and demand side leaving the big picture of the transmission grid in the context of smart grids unclear. This paper presents a unique vision for the future of smart transmission grids in which their major features are identified. In this vision, each smart transmission grid is regarded as an integrated system that functionally consists of three interactive, smart components, i.e., smart control centers, smart transmission networks, and smart substations. The features and functions of each of the three functional components, as well as the enabling technologies to achieve these features and functions, are discussed in detail in the paper.",
"title": ""
},
{
"docid": "4c96561217bb77cf7ca899fbba06bbde",
"text": "The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, which may be unrealistic, time consuming, or not really worth the effort. To improve the security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study which aims to identify which practices people do that they consider most important at protecting their security online. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys—one with 231 security experts and one with 294 MTurk participants—on what the practices and attitudes of each group are. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.",
"title": ""
},
{
"docid": "3c1297b61456db30faefefc19bc079bd",
"text": "The present paper examined the structure of Dutch adolescents’ music preferences, the stability of music preferences and the relations between Big-Five personality characteristics and (changes in) music preferences. Exploratory and confirmatory factor analyses of music-preference data from 2334 adolescents aged 12–19 revealed four clearly interpretable music-preference dimensions: Rock, Elite, Urban and Pop/Dance. One thousand and forty-four randomly selected adolescents from the original sample filled out questionnaires on music preferences and personality at three follow-up measurements. In addition to being relatively stable over 1, 2 and 3-year intervals, music preferences were found to be consistently related to personality characteristics, generally confirming prior research in the United States. Personality characteristics were also found to predict changes in music preferences over a 3-year interval. Copyright # 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "3688c987419daade77c44912fbc72ecf",
"text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.",
"title": ""
},
{
"docid": "be19dab37fdd4b6170816defbc550e2e",
"text": "A new continuous transverse stub (CTS) antenna array is presented in this paper. It is built using the substrate integrated waveguide (SIW) technology and designed for beam steering applications in the millimeter waveband. The proposed CTS antenna array consists of 18 stubs that are arranged in the SIW perpendicular to the wave propagation. The performance of the proposed CTS antenna array is demonstrated through simulation and measurement results. From the experimental results, the peak gain of 11.63-16.87 dBi and maximum radiation power of 96.8% are achieved in the frequency range 27.06-36 GHz with low cross-polarization level. In addition, beam steering capability is achieved in the maximum radiation angle range varying from -43° to 3 ° depending on frequency.",
"title": ""
},
{
"docid": "98e7313ee26e70447b9366ff14b74605",
"text": "We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object.",
"title": ""
},
{
"docid": "25ccaa5a71d0a3f46296c59328e0b9b5",
"text": "Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users.",
"title": ""
},
{
"docid": "10b16932bb8c1d85f759c181da6e5407",
"text": "Many explanations of both proand anti-social behaviors in computer-mediated communication (CMC) appear to hinge on changes in individual self-awareness. In spite of this, little research has been devoted to understanding the effects of self-awareness in CMC. To fill this void, this study examined the effects of individuals public and private self-awareness in anonymous, time-restricted, and synchronous CMC. Two experiments were conducted. A pilot experiment tested and confirmed the effectiveness of using a Web camera combined with an alleged online audience to enhance users public self-awareness. In the main study users private and public self-awareness were manipulated in a crossed 2 · 2 factorial design. Pairs of participants completed a Desert Survival Problem via a synchronous, text-only chat program. After the task, they evaluated each other on intimacy, task/social orientation, formality, politeness, attraction, and group identification. The results suggest that a lack of private and public self-awareness does not automatically lead to impersonal tendencies in CMC as deindividuation perspectives of CMC would argue. Moreover, participants in this study were able to form favorable impressions in a completely anonymous environment based on brief interaction, which lends strong support to the idealization proposed by hyperpersonal theory. Findings are used to modify and extend current theoretical perspectives on CMC. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f5f76c0903f71e2c54954f5705db13b5",
"text": "The English cognate object (CO) construction like laugh a nervous laugh raises intriguing analytic and empirical questions. They include (a) what kind of verb licenses the CO, (b) what is the grammatical status of the CO (including its argumenthood), and (c) what are the semantic/pragmatic contributions of the construction? In answering these questions and to see real usages of the construction, in this paper we have investigated English corpora like the COCA (Corpus of Contemporary American English) and suggest a lexicalist perspective. In particular, we assume that there are two different types of the construction, EVENTIVE-CO and REFERENTIAL-CO, based on the object’s referential property. This difference in the referential power leads to many syntactic differences between the two types. In addition, we show that the uses of the CO selecting verbs are much more flexible than the literature has suggested. As a way of accounting for these variations, we sketch a Construction Grammar view in which argument structure constructions, lexical semantics, and constructional constraints are all interacting together to license the construction in question.",
"title": ""
},
{
"docid": "5ec47bf6ab665012fc321e41634c8b7b",
"text": "This paper presents an extensive indoor radio propagation characteristics at 28 GHz office environment. A full 3D ray tracing simulation results are compared with measurement results and features high correlation. Means of differences between simulation and measurement are 5.13 dB for antenna 1 and 4.51 dB for antenna 2, and standard deviations are 4.03 dB and 3.11 dB. Furthermore novel passive repeaters in both indoor and outdoor environments are presented and compared. The ray tracing simulation procedures for repeaters are introduced and the simulation results are well matched with measured results.",
"title": ""
}
] |
scidocsrr
|