Column              Type            Stats
query_id            stringlengths   32 to 32
query               stringlengths   6 to 5.38k
positive_passages   listlengths     1 to 17
negative_passages   listlengths     9 to 100
subset              stringclasses   7 values
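For readers who want to work with rows like the two examples below, here is a minimal loading sketch. It assumes the table is available as a Hugging Face dataset with exactly the columns listed above; the dataset path "org/dataset-name" and the split name are placeholders, not confirmed identifiers.

```python
from datasets import load_dataset

# Placeholder repository id: substitute the actual dataset path.
ds = load_dataset("org/dataset-name", split="train")

row = ds[0]
print(row["query_id"])                  # 32-character hex id
print(row["query"])                     # paper title used as the query
print(len(row["positive_passages"]))    # 1 to 17 relevant passages
print(len(row["negative_passages"]))    # 9 to 100 non-relevant passages
print(row["subset"])                    # one of 7 subset names, e.g. "scidocsrr"
# Each passage is a dict with "docid", "text", and "title" fields.
```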
73e3e8d381b2ce9015d0c43fd6812524
Glassbox: Dynamic Analysis Platform for Malware Android Applications on Real Devices
[ { "docid": "5184c27b7387a0cbedb1c3a393f797fa", "text": "Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.", "title": "" }, { "docid": "35dda21bd1f2c06a446773b0bfff2dd7", "text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. 
Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.", "title": "" } ]
[ { "docid": "fdbf20917751369d7ffed07ecedc9722", "text": "In order to evaluate the effect of static magnetic field (SMF) on morphological and physiological responses of soybean to water stress, plants were grown under well-watered (WW) and water-stress (WS) conditions. The adverse effects of WS given at different growth stages was found on growth, yield, and various physiological attributes, but WS at the flowering stage severely decreased all of above parameters in soybean. The result indicated that SMF pretreatment to the seeds significantly increased the plant growth attributes, biomass accumulation, and photosynthetic performance under both WW and WS conditions. Chlorophyll a fluorescence transient from SMF-treated plants gave a higher fluorescence yield at J–I–P phase. Photosynthetic pigments, efficiency of PSII, performance index based on absorption of light energy, photosynthesis, and nitrate reductase activity were also higher in plants emerged from SMF-pretreated seeds which resulted in an improved yield of soybean. Thus SMF pretreatment mitigated the adverse effects of water stress in soybean.", "title": "" }, { "docid": "defbecacc15af7684a6f9722349f42e3", "text": "We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic informationretentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.", "title": "" }, { "docid": "840999138cfa5714894d3fcd63401c0f", "text": "Due to the \"curse of dimensionality\" problem, it is very expensive to process the nearest neighbor (NN) query in high-dimensional spaces; and hence, approximate approaches, such as Locality-Sensitive Hashing (LSH), are widely used for their theoretical guarantees and empirical performance. Current LSH-based approaches target at the L1 and L2 spaces, while as shown in previous work, the fractional distance metrics (Lp metrics with 0 < p < 1) can provide more insightful results than the usual L1 and L2 metrics for data mining and multimedia applications. However, none of the existing work can support multiple fractional distance metrics using one index. In this paper, we propose LazyLSH that answers approximate nearest neighbor queries for multiple Lp metrics with theoretical guarantees. Different from previous LSH approaches which need to build one dedicated index for every query space, LazyLSH uses a single base index to support the computations in multiple Lp spaces, significantly reducing the maintenance overhead. Extensive experiments show that LazyLSH provides more accurate results for approximate kNN search under fractional distance metrics.", "title": "" }, { "docid": "8d91b88e9f57181e9c5427b8578bc322", "text": "AIM\n This paper reports on a study that looked at the characteristics of exemplary nurse leaders in times of change from the perspective of frontline nurses.\n\n\nBACKGROUND\n Large-scale changes in the health care system and their associated challenges have highlighted the need for strong leadership at the front line.\n\n\nMETHODS\n In-depth personal interviews with open-ended questions were the primary means of data collection. 
The study identified and explored six frontline nurses' perceptions of the qualities of nursing leaders through qualitative content analysis. This study was validated by results from the current literature.\n\n\nRESULTS\n The frontline nurses described several common characteristics of exemplary nurse leaders, including: a passion for nursing; a sense of optimism; the ability to form personal connections with their staff; excellent role modelling and mentorship; and the ability to manage crisis while guided by a set of moral principles. All of these characteristics pervade the current literature regarding frontline nurses' perspectives on nurse leaders.\n\n\nCONCLUSION\n This study identified characteristics of nurse leaders that allowed them to effectively assist and support frontline nurses in the clinical setting.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n The findings are of significance to leaders in the health care system and in the nursing profession who are in a position to foster development of leaders to mentor and encourage frontline nurses.", "title": "" }, { "docid": "21d65dfd7d864520584cfcdb605ebdb0", "text": "Statistical debugging aims to automate the process of isolating bugs by profiling several runs of the program and using statistical analysis to pinpoint the likely causes of failure. In this paper, we investigate the impact of using richer program profiles such as path profiles on the effectiveness of bug isolation. We describe a statistical debugging tool called HOLMES that isolates bugs by finding paths that correlate with failure. We also present an adaptive version of HOLMES that uses iterative, bug-directed profiling to lower execution time and space overheads. We evaluate HOLMES using programs from the SIR benchmark suite and some large, real-world applications. Our results indicate that path profiles can help isolate bugs more precisely by providing more information about the context in which bugs occur. Moreover, bug-directed profiling can efficiently isolate bugs with low overheads, providing a scalable and accurate alternative to sparse random sampling.", "title": "" }, { "docid": "a4933829bafd2d1e7c3ae3a9ab50c165", "text": "Head drop is a symptom commonly seen in patients with amyotrophic lateral sclerosis. These patients usually experience neck pain and have difficulty in swallowing and breathing. Static neck braces are used in current treatment. These braces, however, immobilize the head in a single configuration, which causes muscle atrophy. This letter presents the design of a dynamic neck brace for the first time in the literature, which can both measure and potentially assist in the head motion of the human user. This letter introduces the brace design method and validates its capability to perform measurements. The brace is designed based on kinematics data collected from a healthy individual via a motion capture system. A pilot study was conducted to evaluate the wearability of the brace and the accuracy of measurements with the brace. This study recruited ten participants who performed a series of head motions. The results of this human study indicate that the brace is wearable by individuals who vary in size, the brace allows nearly $70\\%$ of the overall range of head rotations, and the sensors on the brace give accurate motion of the head with an error of under $5^{\\circ }$ when compared to a motion capture system. We believe that this neck brace can be a valid and accurate measurement tool for human head motion. 
This brace will be a big improvement in the available technologies to measure head motion as these are currently done in the clinic using hand-held protractors in two orthogonal planes.", "title": "" }, { "docid": "70d901bae1e40dc5c585ae1f73c00776", "text": "Sexual abuse includes any activity with a child, before the age of legal consent, that is for the sexual gratification of an adult or a significantly older child. Sexual mistreatment of children by family members (incest) and nonrelatives known to the child is the most common type of sexual abuse. Intrafamiliar sexual abuse is difficult to document and manage, because the child must be protected from additional abuse and coercion not to reveal or to deny the abuse, while attempts are made to preserve the family unit. The role of a comprehensive forensic medical examination is of major importance in the full investigation of such cases and the building of an effective prosecution in the court. The protection of the sexually abused child from any additional emotional trauma during the physical examination is of great importance. A brief assessment of the developmental, behavioral, mental and emotional status should also be obtained. The physical examination includes inspection of the whole body with special attention to the mouth, breasts, genitals, perineal region, buttocks and anus. The next concern for the doctor is the collection of biologic evidence, provided that the alleged sexual abuse has occurred within the last 72 hours. Cultures and serologic tests for sexually transmitted diseases are decided by the doctor according to the special circumstances of each case. Pregnancy test should also be performed in each case of a girl in reproductive age.", "title": "" }, { "docid": "9403a8cb9c0d0d2a7f7634785b9fdab3", "text": "Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although vast amount of research is devoted to sentiment analysis of textual data, there has been very limited work that focuses on analyzing sentiment of image data. In this work, we propose a novel visual sentiment prediction framework that performs image understanding with Convolutional Neural Networks (CNN). Specifically, the proposed sentiment prediction framework performs transfer learning from a CNN with millions of parameters, which is pre-trained on large-scale data for object recognition. Experiments conducted on two real-world datasets from Twitter and Tumblr demonstrate the effectiveness of the proposed visual sentiment analysis framework.", "title": "" }, { "docid": "6b6fd5bfbe1745a49ce497490cef949d", "text": "This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. 
Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to jammer experience much higher secrecy rates than non-adaptive transmitters.", "title": "" }, { "docid": "f70dc802c631c4bda7de2de78217411a", "text": "Researchers, technology reviewers, and governmental agencies have expressed concern that automation may necessitate the introduction of added displays to indicate vehicle intent in vehicle-to-pedestrian interactions. An automated online methodology for obtaining communication intent perceptions for 30 external vehicle-to-pedestrian display concepts was implemented and tested using Amazon Mechanic Turk. Data from 200 qualified participants was quickly obtained and processed. In addition to producing a useful early-stage evaluation of these specific design concepts, the test demonstrated that the methodology is scalable so that a large number of design elements or minor variations can be assessed through a series of runs even on much larger samples in a matter of hours. Using this approach, designers should be able to refine concepts both more quickly and in more depth than available development resources typically allow. Some concerns and questions about common assumptions related to the implementation of vehicle-to-pedestrian displays are posed.", "title": "" }, { "docid": "3f0a9507d6538827faa5a42e87dc2115", "text": "Traditional machine learning requires data to be described by attributes prior to applying a learning algorithm. In text classification tasks, many feature engineering methodologies have been proposed to extract meaningful features, however, no best practice approach has emerged. Traditional methods of feature engineering have inherent limitations due to loss of information and the limits of human design. An alternative is to use deep learning to automatically learn features from raw text data. One promising deep learning approach is to use convolutional neural networks. These networks can learn abstract text concepts from character representations and be trained to perform discriminate tasks, such as classification. In this paper, we propose a new approach to encoding text for use with convolutional neural networks that greatly reduces memory requirements and training time for learning from character-level text representations. Additionally, this approach scales well with alphabet size allowing us to preserve more information from the original text, potentially enhancing classification performance. By training tweet sentiment classifiers, we demonstrate that our approach uses less computational resources, allows faster training for networks and achieves similar, or better performance compared to the previous method of character encoding.", "title": "" }, { "docid": "5ebf60a0f113ec60c4f9f3c2089e86cb", "text": "A rapidly burgeoning literature documents copious sex influences on brain anatomy, chemistry and function. This article highlights some of the more intriguing recent discoveries and their implications. Consideration of the effects of sex can help to explain seemingly contradictory findings. Research into sex influences is mandatory to fully understand a host of brain disorders with sex differences in their incidence and/or nature. 
The striking quantity and diversity of sex-related influences on brain function indicate that the still widespread assumption that sex influences are negligible cannot be justified, and probably retards progress in our field.", "title": "" }, { "docid": "be43ca444001f766e14dd042c411a34f", "text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.", "title": "" }, { "docid": "27cb4869713ddbd3100fd4ca89002cfb", "text": "Simulations of Very-low-frequency (VLF) transmitter signals are conducted using three models: the long-wave propagation capability, a finite-difference (FD) time-domain model, and an FD frequency-domain model. The FD models are corrected using Richardson extrapolation to minimize the numerical dispersion inherent in these models. Using identical ionosphere and ground parameters, the three models are shown to agree very well in their simulated VLF signal amplitude and phase, to within 1 dB of amplitude and a few degrees of phase, for a number of different simulation paths and transmitter frequencies. Furthermore, the three models are shown to produce comparable phase changes for the same ionosphere perturbations, again to within a few degrees. Finally, we show that the models reproduce the phase data of existing VLF transmitter–receiver pairs reasonably well, although the nighttime variation in the measured phase data is not captured by the simplified characterization of the ionosphere.", "title": "" }, { "docid": "27b5cf1967c6dc0a91d04565ae5dbf70", "text": "Crowdsourcing provides a popular paradigm for data collection at scale. We study the problem of selecting subsets of workers from a given worker pool to maximize the accuracy under a budget constraint. 
One natural question is whether we should hire as many workers as the budget allows, or restrict to a small number of top-quality workers. By theoretically analyzing the error rate of a typical setting in crowdsourcing, we frame the worker selection problem into a combinatorial optimization problem and propose an algorithm to solve it efficiently. Empirical results on both simulated and real-world datasets show that our algorithm is able to select a small number of high-quality workers, and performs as well as, sometimes even better than, the much larger crowds as the budget allows.", "title": "" }, { "docid": "fb0fdbdff165a83671dd9373b36caac4", "text": "In this paper, we propose a system that automatically transfers human body motion captured from an ordinary video camera to an unknown 3D character mesh. In our system, no manual intervention is required for specifying the internal skeletal structure or defining how the mesh surfaces deform. A sparse graph is generated from the input polygons based on their connectivity and geometric distributions. To estimate articulated body parts in the video, a progressive particle filter is used for identifying correspondences. We anticipate our proposed system can bring animation to a new audience with a more intuitive user interface.", "title": "" }, { "docid": "296dcc0d1959823d1b5dce85e1263ef2", "text": "BACKGROUND\nViolence against women is a serious human rights abuse and public health issue. Despite growing evidence of the size of the problem, current evidence comes largely from industrialised settings, and methodological differences limit the extent to which comparisons can be made between studies. We aimed to estimate the extent of physical and sexual intimate partner violence against women in 15 sites in ten countries: Bangladesh, Brazil, Ethiopia, Japan, Namibia, Peru, Samoa, Serbia and Montenegro, Thailand, and the United Republic of Tanzania.\n\n\nMETHODS\nStandardised population-based household surveys were done between 2000 and 2003. Women aged 15-49 years were interviewed and those who had ever had a male partner were asked in private about their experiences of physically and sexually violent and emotionally abusive acts.\n\n\nFINDINGS\n24,097 women completed interviews, with around 1500 interviews per site. The reported lifetime prevalence of physical or sexual partner violence, or both, varied from 15% to 71%, with two sites having a prevalence of less than 25%, seven between 25% and 50%, and six between 50% and 75%. Between 4% and 54% of respondents reported physical or sexual partner violence, or both, in the past year. Men who were more controlling were more likely to be violent against their partners. In all but one setting women were at far greater risk of physical or sexual violence by a partner than from violence by other people.\n\n\nINTERPRETATION\nThe findings confirm that physical and sexual partner violence against women is widespread. The variation in prevalence within and between settings highlights that this violence is not inevitable, and must be addressed.", "title": "" }, { "docid": "cb49d71778f873d2f21df73b9e781c8e", "text": "Many people with mental health problems do not use mental health care, resulting in poorer clinical and social outcomes. Reasons for low service use rates are still incompletely understood.
In this longitudinal, population-based study, we investigated the influence of mental health literacy, attitudes toward mental health services, and perceived need for treatment at baseline on actual service use during a 6-month follow-up period, controlling for sociodemographic variables, symptom level, and a history of lifetime mental health service use. Positive attitudes to mental health care, higher mental health literacy, and more perceived need at baseline significantly predicted use of psychotherapy during the follow-up period. Greater perceived need for treatment and better literacy at baseline were predictive of taking psychiatric medication during the following 6 months. Our findings suggest that mental health literacy, attitudes to treatment, and perceived need may be targets for interventions to increase mental health service use.", "title": "" }, { "docid": "b5f7511566b902bc206228dc3214c211", "text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.", "title": "" }, { "docid": "a15ba068638d0df0bd1a501dde97a67e", "text": "Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms (NIAs) as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of ‘‘what we ought to do,’’ I examine ethical dimensions of contemporary NIAs. Specifically, in an effort to sketch an empirically grounded, pragmatic ethics of algorithms, I trace an algorithmic assemblage’s power to convene constituents, suggest actions based on perceived similarity and probability, and govern the timing and timeframes of ethical action.", "title": "" } ]
scidocsrr
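Each row above pairs a query (a paper title) with annotated positive and negative passages, which is the standard input for training or evaluating a passage reranker. As an illustration only, and assuming a row has been loaded as in the earlier sketch, the passages can be flattened into (query, text, label) triples and scored with an off-the-shelf cross-encoder; the specific model name below is an assumption, not something prescribed by this data.

```python
from sentence_transformers import CrossEncoder

def row_to_pairs(row):
    """Flatten one row into (query, passage_text, label) triples."""
    pairs = []
    for p in row["positive_passages"]:
        pairs.append((row["query"], p["text"], 1))
    for n in row["negative_passages"]:
        pairs.append((row["query"], n["text"], 0))
    return pairs

# Hypothetical model choice; any cross-encoder reranker could be used instead.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = row_to_pairs(row)  # `row` as loaded in the earlier sketch
scores = model.predict([(query, text) for query, text, _ in pairs])
# Rank passages for this query by descending score; positives should rank near the top.
ranked = sorted(zip(scores, pairs), key=lambda x: -x[0])
```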
1a4109d23dffd67c388f61dfd7df6a46
Learning to rank relational objects and its application to web search
[ { "docid": "04ba17b4fc6b506ee236ba501d6cb0cf", "text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.", "title": "" } ]
[ { "docid": "92d3c81c7be8ed591019edf2949015a4", "text": "With the increasing popularity of Bitcoin, a digital decentralized currency and payment system, the number of malicious third parties attempting to steal bitcoins has grown substantially. Attackers have stolen bitcoins worth millions of dollars from victims by using malware to gain access to the private keys stored on the victims’ computers or smart phones. In order to protect the Bitcoin private keys, we propose the use of a hardware token for the authorization of transactions. We created a proof-of-concept Bitcoin hardware token: BlueWallet. The device communicates using Bluetooth Low Energy and is able to securely sign Bitcoin transactions. The device can also be used as an electronic wallet in combination with a point of sale and serves as an alternative to cash and credit cards.", "title": "" }, { "docid": "bb0b9b679444291bceecd68153f6f480", "text": "Path planning is one of the most significant and challenging subjects in robot control field. In this paper, a path planning method based on an improved shuffled frog leaping algorithm is proposed. In the proposed approach, a novel updating mechanism based on the median strategy is used to avoid local optimal solution problem in the general shuffled frog leaping algorithm. Furthermore, the fitness function is modified to make the path generated by the shuffled frog leaping algorithm smoother. In each iteration, the globally best frog is obtained and its position is used to lead the movement of the robot. Finally, some simulation experiments are carried out. The experimental results show the feasibility and effectiveness of the proposed algorithm in path planning for mobile robots.", "title": "" }, { "docid": "d4cf614c352b3bbef18d7f219a3da2d1", "text": "In recent years there has been growing interest on the occurrence and the fate of pharmaceuticals in the aquatic environment. Nevertheless, few data are available covering the fate of the pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Lopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which exhibit their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. 
Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine.", "title": "" }, { "docid": "0a432546553ffbb06690495d5c858e19", "text": "Since the first reported death in 1977, scores of seemingly healthy Hmong refugees have died mysteriously and without warning from what has come to be known as Sudden Unexpected Nocturnal Death Syndrome (SUNDS). To date medical research has provided no adequate explanation for these sudden deaths. This study is an investigation into the changing impact of traditional beliefs as they manifest during the stress of traumatic relocation. In Stockton, California, 118 Hmong men and women were interviewed regarding their awareness of and personal experience with a traditional nocturnal spirit encounter. An analysis of this data reveals that the supranormal attack acts as a trigger for Hmong SUNDS.", "title": "" }, { "docid": "49a6de5759f4e760f68939e9292928d8", "text": "An ongoing controversy exists in the prototyping community about how closely in form and function a user-interface prototype should represent the final product. This dispute is referred to as the \"Low- versus High-Fidelity Prototyping Debate.\" In this article, we discuss arguments for and against low- and high-fidelity prototypes, guidelines for the use of rapid user-interface prototyping, and the implications for user-interface designers.", "title": "" }, { "docid": "f2e94643b8896614c3538e7b694b2253", "text": "Training and adaption of employees are time and money consuming. Employees’ turnover can be predicted by their organizational and personal historical data in order to reduce probable loss of organizations. Prediction methods are highly related to human resource management to obtain patterns by historical data. This article implements knowledge discovery steps on real data of a manufacturing plant. We consider many characteristics of employees such as age, technical skills and work experience. Different data mining methods are compared based on their accuracy, calculation time and user friendliness. Furthermore the importance of data features is measured by Pearson Chi-Square test. In order to reach the desired user friendliness, a graphical user interface is designed specifically for the case study to handle knowledge discovery life cycle.", "title": "" }, { "docid": "ba94bfaa5dc669877deedfaee057c93d", "text": "Bayesian networks have become a widely used method in the modelling of uncertain knowledge. Owing to the difficulty domain experts have in specifying them, techniques that learn Bayesian networks from data have become indispensable. Recently, however, there have been many important new developments in this field. This work takes a broad look at the literature on learning Bayesian networks—in particular their structure—from data. Specific topics are not focused on in detail, but it is hoped that all the major fields in the area are covered. This article is not intended to be a tutorial—for this, there are many books on the topic, which will be presented. However, an effort has been made to locate all the relevant publications, so that this paper can be used as a ready reference to find the works on particular sub-topics.", "title": "" }, { "docid": "6aaabe17947bc455d940047745ed7962", "text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities.
We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.", "title": "" }, { "docid": "c6a17677f0020c9f530a3d4236665b64", "text": "In medicine, visualizing chromosomes is important for medical diagnostics, drug development, and biomedical research. Unfortunately, chromosomes often overlap and it is necessary to identify and distinguish between the overlapping chromosomes. A segmentation solution that is fast and automated will enable scaling of cost effective medicine and biomedical research. We apply neural network-based image segmentation to the problem of distinguishing between partially overlapping DNA chromosomes. A convolutional neural network is customized for this problem. The results achieved intersection over union (IOU) scores of 94.7% for the overlapping region and 88-94% on the non-overlapping chromosome regions.", "title": "" }, { "docid": "868c0627cc309c8029fa0edc7f9d24b3", "text": "Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based model. Although social media data like tweets are laden with opinions, their \"dirty\" nature (as natural language) has discouraged researchers from applying LDA-based opinion model for product review mining. Tweets are often informal, unstructured and lacking labeled data such as categories and ratings, making it challenging for product opinion mining. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target specific opinion words, neglected in existing approaches. Moreover, we propose a new formulation of incorporating sentiment prior information into a topic model, by utilizing an existing public sentiment lexicon. This is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on massive volume of tweets provides useful opinions on products.", "title": "" }, { "docid": "fceb43462f77cf858ef9747c1c5f0728", "text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. 
Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.", "title": "" }, { "docid": "381509845636d016eb716540980cb291", "text": "Germinal centers (GCs) are the site of antibody diversification and affinity maturation and as such are vitally important for humoral immunity. The study of GC biology has undergone a renaissance in the past 10 years, with a succession of findings that have transformed our understanding of the cellular dynamics of affinity maturation. In this review, we discuss recent developments in the field, with special emphasis on how GC cellular and clonal dynamics shape antibody affinity and diversity during the immune response.", "title": "" }, { "docid": "a0fb601da8e6b79d4a876730cfee4271", "text": "Social media platforms provide an inexpensive communication medium that allows anyone to publish content and anyone interested in the content can obtain it. However, this same potential of social media provide space for discourses that are harmful to certain groups of people. Examples of these discourses include bullying, offensive content, and hate speech. Out of these discourses hate speech is rapidly recognized as a serious problem by authorities of many countries. In this paper, we provide the first of a kind systematic large-scale measurement and analysis study of explicit expressions of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech, the sensitivity of hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.", "title": "" }, { "docid": "494b375064fbbe012b382d0ad2db2900", "text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. 
So, what does the current evidence say?", "title": "" }, { "docid": "a8478fa2a7088c270f1b3370bb06d862", "text": "Sodium-ion batteries (SIBs) are prospective alternative to lithium-ion batteries for large-scale energy-storage applications, owing to the abundant resources of sodium. Metal sulfides are deemed to be promising anode materials for SIBs due to their low-cost and eco-friendliness. Herein, for the first time, series of copper sulfides (Cu2S, Cu7S4, and Cu7KS4) are controllably synthesized via a facile electrochemical route in KCl-NaCl-Na2S molten salts. The as-prepared Cu2S with micron-sized flakes structure is first investigated as anode of SIBs, which delivers a capacity of 430 mAh g-1 with a high initial Coulombic efficiency of 84.9% at a current density of 100 mA g-1. Moreover, the Cu2S anode demonstrates superior capability (337 mAh g-1 at 20 A g-1, corresponding to 50 C) and ultralong cycle performance (88.2% of capacity retention after 5000 cycles at 5 A g-1, corresponding to 0.0024% of fade rate per cycle). Meanwhile, the pseudocapacitance contribution and robust porous structure in situ formed during cycling endow the Cu2S anodes with outstanding rate capability and enhanced cyclic performance, which are revealed by kinetics analysis and ex situ characterization.", "title": "" }, { "docid": "922e4d742d4fc800ac7e212dda92c7a9", "text": "Maintaining the stability of tracks on multiple targets in video over extended time periods remains a challenging problem. A few methods which have recently shown encouraging results in this direction rely on learning context models or the availability of training data. However, this may not be feasible in many application scenarios. Moreover, tracking methods should be able to work across different scenarios (e.g. multiple resolutions of the video) making such context models hard to obtain. In this paper, we consider the problem of long-term tracking in video in application domains where context information is not available a priori, nor can it be learned online. We build our solution on the hypothesis that most existing trackers can obtain reasonable short-term tracks (tracklets). By analyzing the statistical properties of these tracklets, we develop associations between them so as to come up with longer tracks. This is achieved through a stochastic graph evolution step that considers the statistical properties of individual tracklets, as well as the statistics of the targets along each proposed long-term track. On multiple real-life video sequences spanning low and high resolution data, we show the ability to accurately track over extended time periods (results are shown on many minutes of continuous video).", "title": "" }, { "docid": "732d6bd47a4ab7b77d1c192315a1577c", "text": "In this paper, we address the problem of classifying image sets, each of which contains images belonging to the same class but covering large variations in, for instance, viewpoint and illumination. We innovatively formulate the problem as the computation of Manifold-Manifold Distance (MMD), i.e., calculating the distance between nonlinear manifolds each representing one image set. To compute MMD, we also propose a novel manifold learning approach, which expresses a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrating the distances between pair of subspaces respectively from one of the involved manifolds. The proposed MMD method is evaluated on the task of Face Recognition based on Image Set (FRIS). 
In FRIS, each known subject is enrolled with a set of facial images and modeled as a gallery manifold, while a testing subject is modeled as a probe manifold, which is then matched against all the gallery manifolds by MMD. Identification is achieved by seeking the minimum MMD. Experimental results on two public face databases, Honda/UCSD and CMU MoBo, demonstrate that the proposed MMD method outperforms the competing methods.", "title": "" }, { "docid": "080a7cd58682a156bcddcaad2031fe14", "text": "In this paper, we present new models and algorithms for object-level video advertising. A framework that aims to embed content-relevant ads within a video stream is investigated in this context. First, a comprehensive optimization model is designed to minimize intrusiveness to viewers when ads are inserted in a video. For human clothing advertising, we design a deep convolutional neural network using face features to recognize human genders in a video stream. Human parts alignment is then implemented to extract human part features that are used for clothing retrieval. Second, we develop a heuristic algorithm to solve the proposed optimization problem. For comparison, we also employ the genetic algorithm to find solutions approaching the global optimum. Our novel framework is examined in various types of videos. Experimental results demonstrate the effectiveness of the proposed method for object-level video advertising.", "title": "" }, { "docid": "c3f1a534afe9f5c48aac88812a51ab09", "text": "We propose a novel method MultiModal Pseudo Relevance Feedback (MMPRF) for event search in video, which requires no search examples from the user. Pseudo Relevance Feedback has shown great potential in retrieval tasks, but previous works are limited to unimodal tasks with only a single ranked list. To tackle the event search task which is inherently multimodal, our proposed MMPRF takes advantage of multiple modalities and multiple ranked lists to enhance event search performance in a principled way. The approach is unique in that it leverages not only semantic features, but also non-semantic low-level features for event search in the absence of training data. Evaluated on the TRECVID MEDTest dataset, the approach improves the baseline by up to 158% in terms of the mean average precision. It also significantly contributes to CMU Team's final submission in TRECVID-13 Multimedia Event Detection.", "title": "" }, { "docid": "038064c2998a5da8664be1ba493a0326", "text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O((n/ε^2) log(1/δ)) times to find an ε-optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O((n/ε^2) log(n/δ)). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.", "title": "" } ]
scidocsrr
3c5bbae9d08b579af73c14f6ecd274da
An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems
[ { "docid": "2871de581ee0efe242438567ca3a57dd", "text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.", "title": "" } ]
[ { "docid": "8a3b72d495b7352f6690a7323ab29286", "text": "Security Enhanced Linux (SELinux) is a widely used Mandatory Access Control system which is integrated in the Linux kernel. It is an added layer of security mechanism on top of the standard Discretionary Access Control system that Unix/Linux and other major operating systems have. SELinux does not nullify DAC but in fact supports DAC and its checks are performed after DAC's. If DAC allows an operation then SELinux checks that operation by comparing it with the set of specified rules that it has and decides based on those rules only. If DAC denies some access then SELinux checks are not performed. Because DAC allows users to have full control over files that they own, they could unwantedly set any permission on the files that they own, at their own discretion, which could prove dangerous so for this reason SELinux brings the Mandatory Access Controls (MAC) mechanism which enforces rules based on a specified policy and denies access operations if policy in use do not allow it, even if the file permissions were world-accessible using DAC In this paper we discuss various SELinux policies and provide a statistical comparison using standard Delphi method.", "title": "" }, { "docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c", "text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.", "title": "" }, { "docid": "3e8bffdcf0df0a34b95ecc5432984777", "text": "We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., \"largest elephant standing behind baby elephant\". This is a general yet challenging vision-language task since it does not only require the localization of objects, but also the multimodal comprehension of context - visual attributes (e.g., \"largest\", \"baby\") and relationships (e.g., \"behind\") that help to distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity involved in modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. 
Our model exploits the reciprocal relation between the referent and context, i.e., either of them influences estimation of the posterior distribution of the other, and thereby the search space of context can be greatly reduced. We also extend the model to unsupervised setting where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings. The code is available at https://github.com/yuleiniu/vc/.", "title": "" }, { "docid": "741a897b87cc76d68f5400974eee6b32", "text": "Numerous techniques exist to augment the security functionality of Commercial Off-The-Shelf (COTS) applications and operating systems, making them more suitable for use in mission-critical systems. Although individually useful, as a group these techniques present difficulties to system developers because they are not based on a common framework which might simplify integration and promote portability and reuse. This paper presents techniques for developing Generic Software Wrappers – protected, non-bypassable kernel-resident software extensions for augmenting security without modification of COTS source. We describe the key elements of our work: our high-level Wrapper Definition Language (WDL), and our framework for configuring, activating, and managing wrappers. We also discuss code reuse, automatic management of extensions, a framework for system-building through composition, platform-independence, and our experiences with our Solaris and FreeBSD prototypes.", "title": "" }, { "docid": "443191f41aba37614c895ba3533f80ed", "text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.", "title": "" }, { "docid": "1b7fb04cd80a016ddd53d8481f6da8bd", "text": "The classification of retinal vessels into artery/vein (A/V) is an important phase for automating the detection of vascular changes, and for the calculation of characteristic signs associated with several systemic diseases such as diabetes, hypertension, and other cardiovascular conditions. This paper presents an automatic approach for A/V classification based on the analysis of a graph extracted from the retinal vasculature. The proposed method classifies the entire vascular tree deciding on the type of each intersection point (graph nodes) and assigning one of two labels to each vessel segment (graph links).
Final classification of a vessel segment as A/V is performed through the combination of the graph-based labeling results with a set of intensity features. The results of this proposed method are compared with manual labeling for three public databases. Accuracy values of 88.3%, 87.4%, and 89.8% are obtained for the images of the INSPIRE-AVR, DRIVE, and VICAVR databases, respectively. These results demonstrate that our method outperforms recent approaches for A/V classification.", "title": "" }, { "docid": "c89b903e497ebe8e8d89e8d1d931fae1", "text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c913313524862f21df94651f78616e09", "text": "The solidity is one of the most important factors that greatly affect the performance of the straight-bladed vertical axis wind turbine (SB-VAWT). In this study, numerical computations were carried out on a small model of the SB-VAWT with different solidities to investigate its effects on performance. Two solidity values were chosen, and for each one, three patterns were selected by changing the blade chord and number. Numerical computations based on two-dimensional incompressible steady flow were made. Flow fields around the SB-VAWT were obtained, and the torque and power coefficients were also calculated. According to the computation results under the conditions of this study, the effects of solidity on both the static and dynamic performance of the SB-VAWT were discussed. Keywords: vertical axis wind turbine; straight-bladed; numerical computation; solidity; static torque; power", "title": "" }, { "docid": "b0741999659724f8fa5dc1117ec86f0d", "text": "With the rapidly growing scales of statistical problems, subset-based communication-free parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. 
Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.", "title": "" }, { "docid": "574e0006bffb310bf64417a607adccdf", "text": "We design differentially private learning algorithms that are agnostic to the learning model assuming access to a limited amount of unlabeled public data. First, we provide a new differentially private algorithm for answering a sequence of m online classification queries (given by a sequence of m unlabeled public feature vectors) based on a private training set. Our algorithm follows the paradigm of subsample-and-aggregate, in which any generic non-private learner is trained on disjoint subsets of the private training set, and then for each classification query, the votes of the resulting classifiers ensemble are aggregated in a differentially private fashion. Our private aggregation is based on a novel combination of the distance-to-instability framework [26], and the sparse-vector technique [15, 18]. We show that our algorithm makes a conservative use of the privacy budget. In particular, if the underlying non-private learner yields a classification error of at most α ∈ (0, 1), then our construction answers more queries, by at least a factor of 1/α in some cases, than what is implied by a straightforward application of the advanced composition theorem for differential privacy. Next, we apply the knowledge transfer technique to construct a private learner that outputs a classifier, which can be used to answer an unlimited number of queries. In the PAC model, we analyze our construction and prove upper bounds on the sample complexity for both the realizable and the non-realizable cases. Similar to non-private sample complexity, our bounds are completely characterized by the VC dimension of the concept class.", "title": "" }, { "docid": "e6640dc272e4142a2ddad8291cfaead7", "text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.", "title": "" }, { "docid": "001d2da1fbdaf2c49311f6e68b245076", "text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. 
The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.", "title": "" }, { "docid": "3b8f2694d8b6f7177efa8716d72b9129", "text": "Behara, B and Jacobson, BH. Acute effects of deep tissue foam rolling and dynamic stretching on muscular strength, power, and flexibility in Division I linemen. J Strength Cond Res 31(4): 888-892, 2017-A recent strategy to increase sports performance is a self-massage technique called myofascial release using foam rollers. Myofascial restrictions are believed to be brought on by injuries, muscle imbalances, overrecruitment, and/or inflammation, all of which can decrease sports performance. The purpose of this study was to compare the acute effects of a single-bout of lower extremity self-myofascial release using a custom deep tissue roller (DTR) and a dynamic stretch protocol. Subjects consisted of NCAA Division 1 offensive linemen (n = 14) at a Midwestern university. All players were briefed on the objectives of the study and subsequently signed an approved IRB consent document. A randomized crossover design was used to assess each dependent variable (vertical jump [VJ] power and velocity, knee isometric torque, and hip range of motion was assessed before and after: [a] no treatment, [b] deep tissue foam rolling, and [c] dynamic stretching). Results of repeated-measures analysis of variance yielded no pretest to posttest significant differences (p > 0.05) among the groups for VJ peak power (p = 0.45), VJ average power (p = 0.16), VJ peak velocity (p = 0.25), VJ average velocity (p = 0.23), peak knee extension torque (p = 0.63), average knee extension torque (p = 0.11), peak knee flexion torque (p = 0.63), or average knee flexion torque (p = 0.22). However, hip flexibility was statistically significant when tested after both dynamic stretching and foam rolling (p = 0.0001). Although no changes in strength or power was evident, increased flexibility after DTR may be used interchangeably with traditional stretching exercises.", "title": "" }, { "docid": "bf0531b03cc36a69aca1956b21243dc6", "text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. 
Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …", "title": "" }, { "docid": "e201c682e1e048b92a60ade663aa7112", "text": "In this paper, we study the problem of landmark recognition and propose to leverage 3D visual phrases to improve the performance. A 3D visual phrase is a triangular facet on the surface of a reconstructed 3D landmark model. In contrast to existing 2D visual phrases which are mainly based on co-occurrence statistics in 2D image planes, such 3D visual phrases explicitly characterize the spatial structure of a 3D object (landmark), and are highly robust to projective transformations due to viewpoint changes. We present an effective solution to discover, describe, and detect 3D visual phrases. The experiments on 10 landmarks have achieved promising results, which demonstrate that our approach provides a good balance between precision and recall of landmark recognition while reducing the dependence on post-verification to reject false positives.", "title": "" }, { "docid": "1d53b01ee1a721895a17b7d0f3535a28", "text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.", "title": "" }, { "docid": "ad059332e36849857c9bf1a52d5b0255", "text": "Interaction Design Beyond Human Computer Interaction instructions guide, service manual guide and maintenance manual guide for the products. Before employing this manual, service or maintenance guide you should know detail regarding your products cause this manual for expert only. 
We hope ford alternator wiring diagram internal regulator and yet another manual of these lists a good choice for your to repair, fix and solve your product or service or device problems don't try an oversight.", "title": "" }, { "docid": "3673e0f738cf6fd1cc7c94650e827273", "text": "An important question when eliciting opinions from experts is how to aggregate the reported opinions. In this paper, we propose a pooling method to aggregate expert opinions. Intuitively, it works as if the experts were continuously updating their opinions in order to accommodate the expertise of others. Each updated opinion takes the form of a linear opinion pool, where the weight that an expert assigns to a peer’s opinion is inversely related to the distance between their opinions. In other words, experts are assumed to prefer opinions that are close to their own opinions. We prove that such an updating process leads to consensus, i.e., the experts all converge towards the same opinion. Further, we show that if rational experts are rewarded using the quadratic scoring rule, then the assumption that they prefer opinions that are close to their own opinions follows naturally. We empirically demonstrate the efficacy of the proposed method using real-world data.", "title": "" }, { "docid": "2f4325291ec4d705ed2fe19e57d4db36", "text": "Reliable precision grasping for unknown objects is a prerequisite for robots that work in the field of logistics, manufacturing and household tasks. The nature of this task requires a simultaneous solution of a mixture of sub-problems. These include estimating object properties, finding viable grasps and executing grasps without displacement. We propose to explicitly take perceptual uncertainty into account during grasp execution. The underlying object representation is a probabilistic signed distance field, which includes both signed distances to the surface and spatially interpretable variances. Based on this representation, we propose a two-stage grasp generation method, which is specifically designed for generating precision grasps. In order to evaluate the whole approach, we perform extensive real world grasping experiments on a set of hard-to-grasp objects. Our approach achieves 78% success rate and shows robustness to the placement orientation.", "title": "" }, { "docid": "12363d704fcfe9fef767c5e27140c214", "text": "The application range of UAVs (unmanned aerial vehicles) is expanding along with performance upgrades. Vertical take-off and landing (VTOL) aircraft has the merits of both fixed-wing and rotary-wing aircraft. Tail-sitting is the simplest way for the VTOL maneuver since it does not need extra actuators. However, conventional hovering control for a tail-sitter UAV is not robust enough against large disturbance such as a blast of wind, a bird strike, and so on. It is experimentally observed that the conventional quaternion feedback hovering control often fails to keep stability when the control compensates large attitude errors. This paper proposes a novel hovering control strategy for a tail-sitter VTOL UAV that increases stability against large disturbance. In order to verify the proposed hovering control strategy, simulations and experiments on hovering of the UAV are performed giving large attitude errors. The results show that the proposed control strategy successfully compensates initial large attitude errors keeping stability, while the conventional quaternion feedback controller fails.", "title": "" } ]
scidocsrr
51147a318341b36fad9d091ee252ecf1
Who Leads the Clothing Fashion: Style, Color, or Texture? A Computational Study
[ { "docid": "e77dc44a5b42d513bdbf4972d62a74f9", "text": "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.", "title": "" }, { "docid": "b17fdc300edc22ab855d4c29588731b2", "text": "Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.", "title": "" } ]
[ { "docid": "0fa55762a86f658aa2936cd63f2db838", "text": "Mindfulness has received considerable attention as a correlate of psychological well-being and potential mechanism for the success of mindfulness-based interventions (MBIs). Despite a common emphasis of mindfulness, at least in name, among MBIs, mindfulness proves difficult to assess, warranting consideration of other common components. Self-compassion, an important construct that relates to many of the theoretical and practical components of MBIs, may be an important predictor of psychological health. The present study compared ability of the Self-Compassion Scale (SCS) and the Mindful Attention Awareness Scale (MAAS) to predict anxiety, depression, worry, and quality of life in a large community sample seeking self-help for anxious distress (N = 504). Multivariate and univariate analyses showed that self-compassion is a robust predictor of symptom severity and quality of life, accounting for as much as ten times more unique variance in the dependent variables than mindfulness. Of particular predictive utility are the self-judgment and isolation subscales of the SCS. These findings suggest that self-compassion is a robust and important predictor of psychological health that may be an important component of MBIs for anxiety and depression.", "title": "" }, { "docid": "98c64622f9a22f89e3f9dd77c236f310", "text": "After a development process of many months, the TLS 1.3 specification is nearly complete. To prevent past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.", "title": "" }, { "docid": "ff04301675ffa651e9cbdfbb9c6ab75d", "text": "It is challenging to detect and track the ball from the broadcast soccer video. The feature-based tracking methods to judge if a sole object is a target are inadequate because the features of the balls change fast over frames and we cannot differ the ball from other objects by them. This paper proposes a new framework to find the ball position by creating and analyzing the trajectory. The ball trajectory is obtained from the candidate collection by use of the heuristic false candidate reduction, the Kalman filterbased trajectory mining, and the trajectory evaluation. The ball trajectory is extended via a localized Kalman filter-based model matching procedure. 
The experimental results on two consecutive 1000-frame sequences illustrate that the proposed framework is very effective and can obtain a very high accuracy that is much better than existing methods.", "title": "" }, { "docid": "f4859226e52f7c9d2b2dc4ac8a0255de", "text": "Imbalanced data learning is one of the challenging problems in data mining; within this area, finding the right model assessment measures is a primary research issue. A skewed class distribution causes a misreading of common evaluation measures and also leads to biased classification. This article presents a set of alternatives for imbalanced data learning assessment, using combined measures (G-means, likelihood ratios, Discriminant power, F-Measure Balanced Accuracy, Youden index, Matthews correlation coefficient) and graphical performance assessments (ROC curve, Area Under Curve, Partial AUC, Weighted AUC, Cumulative Gains Curve and lift chart, Area Under Lift AUL), that aim to provide a more credible evaluation. We analyze the application of these measures to the evaluation of churn prediction models, a well-known application of imbalanced data.", "title": "" }, { "docid": "f941c1f5e5acd9865e210b738ff1745a", "text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.", "title": "" }, { "docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02", "text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.", "title": "" }, { "docid": "e727b64ba45852732f836808ff330940", "text": "Deep learning research on transformation problems for image and text has attracted great attention. However, present methods for music feature transfer using neural networks are far from practical application. In this paper, we introduce a novel system for transferring the texture of music, and release it as an open source project. Its core algorithm is composed of a converter which represents sounds as texture spectra, a corresponding reconstructor and a feed-forward transfer network. 
We evaluate this system from multiple perspectives, and experimental results reveal that it achieves convincing results in both sound effects and computational performance.", "title": "" }, { "docid": "b568dae2d11ca8c28c0b7268368ce53d", "text": "The Box and Block Test, a test of manual dexterity, has been used by occupational therapists and others to evaluate physically handicapped individuals. Because the test lacked normative data for adults, the results of the test have been interpreted subjectively. The purpose of this study was to develop normative data for adults. Test subjects were 628 Normal adults (310 males and 318 females)from the seven-county Milwaukee area. Data on males and females 20 to 94 years old were divided into 12 age groups. Means, standard deviations, standard error, and low and high scores are reported for each five-year age group. These data will enable clinicians to objectively compare a patient's score to a normal population parameter. Occupational therapists are frequently involved with increasing the manual dexterity of their patients. Often, these patients are unable to perform tests offine manual or finger dexterity, such as the Purdue Pegboard Test or the Crawford Small Parts Dexterity Test. Tests of manual dexterity, such as the Minnesota Rate of Manipulation Test, have limited clinical application because a) they require lengthy administration time, b) a standardized standing position must be used for testing, and c) the tests use normative samples that poorly represent the wide range of clinical patients. Because of the limitations of such standardized tests, therapists often evaluate dexterity subjectively. The Box and Block Test has been suggested as a measure of gross manual dexterity (1) and as a prevocational test for handicapped people (2). Norms have been collected on adults with neuromuscular involvement (2) and on normal children (7, 8, and 9 years old) (3). Standardized instructions along with reliability and validity data, are reported in the literature (2,3), but there are no norms for the normal adult population. Therefore, the purpose of this study was to collect normative data for adults. Methods", "title": "" }, { "docid": "9f3966e64089594b261e1cd9dca8eef1", "text": "We examine how control over a technology platform can increase profits and innovation. By choosing how much to open and when to bundle enhancements, platform sponsors can influence choices of ecosystem partners. Platform openness invites developer participation but sacrifices direct sales. Bundling enhancements early drives developers away but bundling late delays platform growth. Ironically, developers can prefer sponsored platforms to unmanaged open standards despite giving up their applications. Results can inform antitrust law and innovation strategy.", "title": "" }, { "docid": "609bc0aa7dcd9ffc97e753642bec8c82", "text": "Current trends in energy power generation are leading efforts related to the development of more reliable, sustainable sources and technologies for energy harvesting. Solar energy is one of these renewable energy resources, widely available in nature. Most of the solar panels used today to convert solar energy into chemical energy, and then to electrical energy, are stationary. 
Energy efficiency studies have shown that more electrical energy can be retrieved from solar panels if they are organized in arrays and then placed on a solar tracker that can then follow the sun as it moves during the day from east to west, and as it moves from north to south during the year, as seasons change. Adding more solar panels to solar tracker structures will improve its yield. It would also add more challenges when it comes to managing the overall weight of such structures, and their strength and reliability under different weather conditions, such as wind, changes in temperature, and atmospheric conditions. Hence, careful structural design and simulation is needed to establish the most optimal parameters in order for solar trackers to withstand all environmental conditions and to function with a high reliability for long periods of time.", "title": "" }, { "docid": "3c79b81af0d84dcbfebb2108f3078dc4", "text": "This paper reviews the available literature on computational modelling in two areas of bone biomechanics: fracture and healing. Bone is a complex material, with a multiphasic, heterogeneous and anisotropic microstructure. The processes of fracture and healing can only be understood in terms of the underlying bone structure and its mechanical role. Bone fracture analysis attempts to predict the failure of musculoskeletal structures by several possible mechanisms under different loading conditions. However, as opposed to structurally inert materials, bone is a living tissue that can repair itself. An exciting new field of research is being developed to better comprehend these mechanisms and the mechanical behaviour of bone tissue. One of the main goals of this work is to demonstrate, after a review of computational models, the main similarities and differences between normal engineering materials and bone tissue from a structural point of view. We also underline the importance of computational simulations in biomechanics due to the difficulty of obtaining experimental or clinical results. 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "65fd482ac37852214fc82b4bc05c6f72", "text": "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.", "title": "" }, { "docid": "efd79ed4f8fba97f0ee4a2774f40da6a", "text": "This paper presents a new algorithm for the extrinsic calibration of a perspective camera and an invisible 2D laser-rangefinder (LRF). 
The calibration is achieved by freely moving a checkerboard pattern in order to obtain plane poses in camera coordinates and depth readings in the LRF reference frame. The problem of estimating the rigid displacement between the two sensors is formulated as one of registering a set of planes and lines in the 3D space. It is proven for the first time that the alignment of three plane-line correspondences has at most eight solutions that can be determined by solving a standard p3p problem and a linear system of equations. This leads to a minimal closed-form solution for the extrinsic calibration that can be used as hypothesis generator in a RANSAC paradigm. Our calibration approach is validated through simulation and real experiments that show the superiority with respect to the current state-of-the-art method requiring a minimum of five input planes.", "title": "" }, { "docid": "fb898ef1b13d68ca3b5973b77237de74", "text": "We present a nonrigid alignment algorithm for aligning high-resolution range data in the presence of low-frequency deformations, such as those caused by scanner calibration error. Traditional iterative closest points (ICP) algorithms, which rely on rigid-body alignment, fail in these cases because the error appears as a nonrigid warp in the data. Our algorithm combines the robustness and efficiency of ICP with the expressiveness of thin-plate splines to align high-resolution scanned data accurately, such as scans from the Digital Michelangelo Project [M. Levoy et al. (2000)]. This application is distinguished from previous uses of the thin-plate spline by the fact that the resolution and size of warping are several orders of magnitude smaller than the extent of the mesh, thus requiring especially precise feature correspondence.", "title": "" }, { "docid": "cf222e0f90538d150cc45ae30edf696c", "text": "Workflows are a widely used abstraction for representing large scientific applications and executing them on distributed systems such as clusters, clouds, and grids. However, workflow systems have been largely silent on the question of precisely what environment each task in the workflow is expected to run in. As a result, a workflow may run correctly in the environment in which it was designed, but when moved to another machine, is highly likely to fail due to differences in the operating system, installed applications, available data, and so forth. Lightweight container technology has recently arisen as a potential solution to this problem, by providing a well-defined execution environments at the operating system level. In this paper, we consider how to best integrate container technology into an existing workflow system, using Makeflow, Work Queue, and Docker as examples of current technology. A brief performance study of Docker shows very little overhead in CPU and I/O performance, but significant costs in creating and deleting containers. Taking this into account, we describe four different methods of connecting containers to different points of the infrastructure, and explain several methods of managing the container images that must be distributed to executing tasks. 
We explore the performance of a large bioinformatics workload on a Docker-enabled cluster, and observe the best configuration to be locally-managed containers that are shared between multiple tasks.", "title": "" }, { "docid": "d2401987609efcb5a7fe420d48dfec1b", "text": "Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets. The Fully Independent Training Conditional (FITC) and the Variational Free Energy (VFE) approximations are two recent popular methods. Despite superficial similarities, these approximations have surprisingly different theoretical properties and behave differently in practice. We thoroughly investigate the two methods for regression both analytically and through illustrative examples, and draw conclusions to guide practical application.", "title": "" }, { "docid": "8e3eec62b02a9cf7a56803775757925f", "text": "Emotional states of individuals, also known as moods, are central to the expression of thoughts, ideas and opinions, and in turn impact attitudes and behavior. As social media tools are increasingly used by individuals to broadcast their day-to-day happenings, or to report on an external event of interest, understanding the rich ‘landscape’ of moods will help us better interpret and make sense of the behavior of millions of individuals. Motivated by literature in psychology, we study a popular representation of human mood landscape, known as the ‘circumplex model’ that characterizes affective experience through two dimensions: valence and activation. We identify more than 200 moods frequent on Twitter, through mechanical turk studies and psychology literature sources, and report on four aspects of mood expression: the relationship between (1) moods and usage levels, including linguistic diversity of shared content (2) moods and the social ties individuals form, (3) moods and amount of network activity of individuals, and (4) moods and participatory patterns of individuals such as link sharing and conversational engagement. Our results provide at-scale naturalistic assessments and extensions of existing conceptualizations of human mood in social media contexts.", "title": "" }, { "docid": "427028ef819df3851e37734e5d198424", "text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. 
The paper also identifies key issues relevant to the systematic testing of AOPs.", "title": "" }, { "docid": "774938c175781ed644327db1dae9d1d4", "text": "It is widely accepted that sizing or predicting the volumes of various kinds of software deliverable items is one of the first and most dominant aspects of software cost estimating. Most cost estimation models or techniques usually assume that software size or structural complexity is the integral factor that influences software development effort. Although sizing and complexity measurement is critical, due to the need for reliable size estimates when using existing software project cost estimation models, and remains a complex problem for software cost estimating, advances in sizing technology over the past 30 years have been impressive. This paper attempts to review the 12 object-oriented software metrics proposed in the 1990s by Chidamber, Kemerer and Li.", "title": "" } ]
scidocsrr
d38f4b8c76e47fb0fd1a230435245a72
Classifying Conversation in Digital Communication
[ { "docid": "f578c9ea0ac7f28faa3d9864c0e43711", "text": "Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph convolutional networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.", "title": "" } ]
[ { "docid": "106b7450136b9eafdddbaca5131be2f5", "text": "This paper describes the main features of a low cost and compact Ka-band satcom terminal being developed within the ESA-project LOCOMO. The terminal will be compliant with all capacities associated with communication on the move supplying higher quality, better performance and faster speed services than the current available solutions in Ku band. The terminal will be based on a dual polarized low profile Ka-band antenna with TX and RX capabilities.", "title": "" }, { "docid": "3851a77360fb2d6df454c1ee19c59037", "text": "Plantar fasciitis affects nearly 1 million persons in the United States at any one time. Conservative therapies have been reported to successfully treat 90% of plantar fasciitis cases; however, for the remaining cases, only invasive therapeutic solutions remain. This investigation studied newly emerging technology, low-level laser therapy. From September 2011 to June 2013, 69 subjects were enrolled in a placebo-controlled, randomized, double-blind, multicenter study that evaluated the clinical utility of low-level laser therapy for the treatment of unilateral chronic fasciitis. The volunteer participants were treated twice a week for 3 weeks for a total of 6 treatments and were evaluated at 5 separate time points: before the procedure and at weeks 1, 2, 3, 6, and 8. The pain rating was recorded using a visual analog scale, with 0 representing \"no pain\" and 100 representing \"worst pain.\" Additionally, Doppler ultrasonography was performed on the plantar fascia to measure the fascial thickness before and after treatment. Study participants also completed the Foot Function Index. At the final follow-up visit, the group participants demonstrated a mean improvement in heel pain with a visual analog scale score of 29.6 ± 24.9 compared with the placebo subjects, who reported a mean improvement of 5.4 ± 16.0, a statistically significant difference (p < .001). Although additional studies are warranted, these data have demonstrated that low-level laser therapy is a promising treatment of plantar fasciitis.", "title": "" }, { "docid": "4731a95b14335a84f27993666b192bba", "text": "Blockchain has been applied to study data privacy and network security recently. In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.", "title": "" }, { "docid": "c6eb01a11e88dd686a47ca594b424350", "text": "Automatic fake news detection is an important, yet very challenging topic. Traditional methods using lexical features have only very limited success. This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection. Speaker profiles contribute to the model in two ways. One is to include them in the attention model. The other includes them as additional input data. 
By adding speaker profiles such as party affiliation, speaker title, location and credit history, our model outperforms the state-of-the-art method by 14.5% in accuracy using a benchmark fake news detection dataset. This proves that speaker profiles provide valuable information to validate the credibility of news articles.", "title": "" }, { "docid": "5fe472c30e1dad99628511e03a707aac", "text": "An automatic program that generates constant profit from the financial market is lucrative for every market practitioner. Recent advance in deep reinforcement learning provides a framework toward end-to-end training of such trading agent. In this paper, we propose an Markov Decision Process (MDP) model suitable for the financial trading task and solve it with the state-of-the-art deep recurrent Q-network (DRQN) algorithm. We propose several modifications to the existing learning algorithm to make it more suitable under the financial trading setting, namely 1. We employ a substantially small replay memory (only a few hundreds in size) compared to ones used in modern deep reinforcement learning algorithms (often millions in size.) 2. We develop an action augmentation technique to mitigate the need for random exploration by providing extra feedback signals for all actions to the agent. This enables us to use greedy policy over the course of learning and shows strong empirical performance compared to more commonly used epsilon-greedy exploration. However, this technique is specific to financial trading under a few market assumptions. 3. We sample a longer sequence for recurrent neural network training. A side product of this mechanism is that we can now train the agent for every T steps. This greatly reduces training time since the overall computation is down by a factor of T. We combine all of the above into a complete online learning algorithm and validate our approach on the spot foreign exchange market.", "title": "" }, { "docid": "c6cb6b1cb964d0e2eb8ad344ee4a62b3", "text": "Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of survived CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. The proposed solution for associative classifiers turns to be suitable to practically address ∗Corresponding Author: Tel: +39 05", "title": "" }, { "docid": "11d06fb5474df44a6bc733bd5cd1263d", "text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. 
Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.", "title": "" }, { "docid": "fd54d540c30968bb8682a4f2eee43c8d", "text": "This paper presents LISSA (“Learning dashboard for Insights and Support during Study Advice”), a learning analytics dashboard designed, developed, and evaluated in collaboration with study advisers. The overall objective is to facilitate communication between study advisers and students by visualizing grade data that is commonly available in any institution. More specifically, the dashboard attempts to support the dialogue between adviser and student through an overview of study progress, peer comparison, and by triggering insights based on facts as a starting point for discussion and argumentation. We report on the iterative design process and evaluation results of a deployment in 97 advising sessions. We have found that the dashboard supports the current adviser-student dialogue, helps them motivate students, triggers conversation, and provides tools to add personalization, depth, and nuance to the advising session. It provides insights at a factual, interpretative, and reflective level and allows both adviser and student to take an active role during the session.", "title": "" }, { "docid": "0c1001c6195795885604a2aaa24ddb07", "text": "Recent advances in artificial intelligence (AI) have increased the opportunities for users to interact with the technology. Now, users can even collaborate with AI in creative activities such as art. To understand the user experience in this new user--AI collaboration, we designed a prototype, DuetDraw, an AI interface that allows users and the AI agent to draw pictures collaboratively. We conducted a user study employing both quantitative and qualitative methods. Thirty participants performed a series of drawing tasks with the think-aloud method, followed by post-hoc surveys and interviews. Our findings are as follows: (1) Users were significantly more content with DuetDraw when the tool gave detailed instructions. (2) While users always wanted to lead the task, they also wanted the AI to explain its intentions but only when the users wanted it to do so. (3) Although users rated the AI relatively low in predictability, controllability, and comprehensibility, they enjoyed their interactions with it during the task. Based on these findings, we discuss implications for user interfaces where users can collaborate with AI in creative works.", "title": "" }, { "docid": "7ee4843ff164a7b7fd096a27b25e2c4d", "text": "Breast cancer remains a significant scientific, clinical and societal challenge. This gap analysis has reviewed and critically assessed enduring issues and new challenges emerging from recent research, and proposes strategies for translating solutions into practice. 
More than 100 internationally recognised specialist breast cancer scientists, clinicians and healthcare professionals collaborated to address nine thematic areas: genetics, epigenetics and epidemiology; molecular pathology and cell biology; hormonal influences and endocrine therapy; imaging, detection and screening; current/novel therapies and biomarkers; drug resistance; metastasis, angiogenesis, circulating tumour cells, cancer ‘stem’ cells; risk and prevention; living with and managing breast cancer and its treatment. The groups developed summary papers through an iterative process which, following further appraisal from experts and patients, were melded into this summary account. The 10 major gaps identified were: (1) understanding the functions and contextual interactions of genetic and epigenetic changes in normal breast development and during malignant transformation; (2) how to implement sustainable lifestyle changes (diet, exercise and weight) and chemopreventive strategies; (3) the need for tailored screening approaches including clinically actionable tests; (4) enhancing knowledge of molecular drivers behind breast cancer subtypes, progression and metastasis; (5) understanding the molecular mechanisms of tumour heterogeneity, dormancy, de novo or acquired resistance and how to target key nodes in these dynamic processes; (6) developing validated markers for chemosensitivity and radiosensitivity; (7) understanding the optimal duration, sequencing and rational combinations of treatment for improved personalised therapy; (8) validating multimodality imaging biomarkers for minimally invasive diagnosis and monitoring of responses in primary and metastatic disease; (9) developing interventions and support to improve the survivorship experience; (10) a continuing need for clinical material for translational research derived from normal breast, blood, primary, relapsed, metastatic and drug-resistant cancers with expert bioinformatics support to maximise its utility. The proposed infrastructural enablers include enhanced resources to support clinically relevant in vitro and in vivo tumour models; improved access to appropriate, fully annotated clinical samples; extended biomarker discovery, validation and standardisation; and facilitated cross-discipline working. With resources to conduct further high-quality targeted research focusing on the gaps identified, increased knowledge translating into improved clinical care should be achievable within five years.", "title": "" }, { "docid": "e34f38c3c73f3e4c41ac44bc81d86ab7", "text": "Euler number of a binary image is a fundamental topological feature that remains invariant under translation, rotation, scaling, and rubber-sheet transformation of the image. In this work, a run-based method for computing Euler number is formulated and a new hardware implementation is described. Analysis of time complexity and performance measure is provided to demonstrate the efficiency of the method. The sequential version of the proposed algorithm requires significantly fewer number of pixel accesses compared to the existing methods and tools based on bit-quad counting or quad-tree, both for the worst case and the average case. A pipelined architecture is designed with a single adder tree to implement the algorithm on-chip by exploiting its inherent parallelism. The architecture uses O(N) 2-input gates and requires O(N logN) time to compute the Euler number of an N · N image. 
The same hardware, with minor modification, can be used to handle arbitrarily large pixel matrices. A standard cell based VLSI implementation of the architecture is also reported. As Euler number is a widely used parameter, the proposed design can be readily used to save computation time in many image processing applications. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "bd059d97916f4c34d6f6320c3b168b7d", "text": "Autophagy degrades cytoplasmic components and is important for development and human health. Although autophagy is known to be influenced by systemic intercellular signals, the proteins that control autophagy are largely thought to function within individual cells. Here, we report that Drosophila macroglobulin complement-related (Mcr), a complement ortholog, plays an essential role during developmental cell death and inflammation by influencing autophagy in neighboring cells. This function of Mcr involves the immune receptor Draper, suggesting a relationship between autophagy and the control of inflammation. Interestingly, Mcr function in epithelial cells is required for macrophage autophagy and migration to epithelial wounds, a Draper-dependent process. This study reveals, unexpectedly, that complement-related from one cell regulates autophagy in neighboring cells via an ancient immune signaling program.", "title": "" }, { "docid": "72c164c281e98386a054a25677c21065", "text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector. Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology. One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly among those in minimum-waged, short-term job roles. This paper presents a structured approach for eliciting industry requirements for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and a threat analysis of the hospitality industry to identify the requirements for designing and implementing a cyber security program which encourages engagement through a cycle of reward and recognition. 
In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring the impact of their employee’s progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.", "title": "" }, { "docid": "5781bae1fdda2d2acc87102960dab3ed", "text": "Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may get acclimated to violation reports from these tools, causing concrete and severe bugs being overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that those violations that are recurrently fixed are likely to be true positives, and an automated approach can learn to repair similar unseen violations. However, there is lack of a systematic way to investigate the distributions on existing violations and fixed ones in the wild, that can provide insights into prioritizing violations for developers, and an effective way to mine code and fix patterns which can help developers easily understand the reasons of leading violations and how to fix them. In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J major benchmark for software testing and automated repair.", "title": "" }, { "docid": "05d282026dcecb3286c9ffbd88cb72a3", "text": "Although deep neural networks (DNNs) are state-of-the-art artificial intelligence systems, it is unclear what insights, if any, they provide about human intelligence. We address this issue in the domain of visual perception. After briefly describing DNNs, we provide an overview of recent results comparing human visual representations and performance with those of DNNs. In many cases, DNNs acquire visual representations and processing strategies that are very different from those used by people. We conjecture that there are at least two factors preventing them from serving as better psychological models. First, DNNs are currently trained with impoverished data, such as data lacking important visual cues to three-dimensional structure, data lacking multisensory statistical regularities, and data in which stimuli are unconnected to an observer’s actions and goals. 
Second, DNNs typically lack adaptations to capacity limits, such as attentional mechanisms, visual working memory, and compressed mental representations biased toward preserving task-relevant abstractions.", "title": "" }, { "docid": "f975aff622406ca7e563f60e8488f6fa", "text": "Analog-to-digital converter (ADC)-based multi-Gb/s serial link receivers have gained increasing attention in the backplane community due to the desire for higher I/O throughput, ease of design portability, and flexibility. However, the power dissipation in such receivers is dominated by the ADC. ADCs in serial links employ signal-to-noise-and-distortion ratio (SNDR) and effective-number-of-bit (ENOB) as performance metrics as these are the standard for generic ADC design. This paper studies the use of information-based metrics such as bit-error-rate (BER) to design a BER-optimal ADC (BOA) for serial links. Channel parameters such as the m-clustering value and the threshold non-uniformity metric h_t are introduced and employed to quantify the BER improvement achieved by a BOA over a conventional uniform ADC (CUA) in a receiver. Analytical expressions for BER improvement are derived and validated through simulations. A prototype BOA is designed, fabricated and tested in a 1.2 V, 90 nm LP CMOS process to verify the results of this study. BOA's variable-threshold and variable-resolution configurations are implemented via an 8-bit single-core, multiple-output passive digital-to-analog converter (DAC), which incurs an additional power overhead of < 0.1% (approximately 50 μW). Measurement results show examples in which the BER achieved by the 3-bit BOA receiver is lower by a factor of 10⁹ and 10¹⁰, as compared to the 4-bit and 3-bit CUA receivers, respectively, at a data rate of 4-Gb/s and a transmitted signal amplitude of 180 mVppd.", "title": "" }, { "docid": "d669dfcdc2486314bd7234e1f42357de", "text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n² = 2 - r², where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describes the theoretical background and the performance of the obtained lens.", "title": "" }, { "docid": "3ed823504a503fd7148daae3f23190db", "text": "The ultimate goal of most biomedical research is to gain greater insight into mechanisms of human disease or to develop new and improved therapies or diagnostics. Although great advances have been made in terms of developing disease models in animals, such as transgenic mice, many of these models fail to faithfully recapitulate the human condition. In addition, it is difficult to identify critical cellular and molecular contributors to disease or to vary them independently in whole-animal models. 
This challenge has attracted the interest of engineers, who have begun to collaborate with biologists to leverage recent advances in tissue engineering and microfabrication to develop novel in vitro models of disease. As these models are synthetic systems, specific molecular factors and individual cell types, including parenchymal cells, vascular cells, and immune cells, can be varied independently while simultaneously measuring system-level responses in real time. In this article, we provide some examples of these efforts, including engineered models of diseases of the heart, lung, intestine, liver, kidney, cartilage, skin and vascular, endocrine, musculoskeletal, and nervous systems, as well as models of infectious diseases and cancer. We also describe how engineered in vitro models can be combined with human inducible pluripotent stem cells to enable new insights into a broad variety of disease mechanisms, as well as provide a test bed for screening new therapies.", "title": "" }, { "docid": "a7f72b95da401ee4f710eb019652bb03", "text": "Recurrent Neural Network (RNN) are a popular choice for modeling temporal and sequential tasks and achieve many state-of-the-art performance on various complex problems. However, most of the state-of-the-art RNNs have millions of parameters and require many computational resources for training and predicting new data. This paper proposes an alternative RNN model to reduce the number of parameters significantly by representing the weight parameters based on Tensor Train (TT) format. In this paper, we implement the TT-format representation for several RNN architectures such as simple RNN and Gated Recurrent Unit (GRU). We compare and evaluate our proposed RNN model with uncompressed RNN model on sequence classification and sequence prediction tasks. Our proposed RNNs with TT-format are able to preserve the performance while reducing the number of RNN parameters significantly up to 40 times smaller.", "title": "" }, { "docid": "bde9e26746ddcc6e53f442a0e400a57e", "text": "Aljebreen, Mohammed, \"Implementing a dynamic scaling of web applications in a virtualized cloud computing environment\" (2013). Abstract Cloud computing is becoming more essential day by day. The allure of the cloud is the significant value and benefits that people gain from it, such as reduced costs, increased storage, flexibility, and more mobility. Flexibility is one of the major benefits that cloud computing can provide in terms of scaling up and down the infrastructure of a network. Once traffic has increased on one server within the network, a load balancer instance will route incoming requests to a healthy instance, which is less busy and less burdened. When the full complement of instances cannot handle any more requests, past research has been done by Chieu et. al. that presented a scaling algorithm to address a dynamic scalability of web applications on a virtualized cloud computing environment based on relevant indicators that can increase or decrease servers, as needed. In this project, I implemented the proposed algorithm, but based on CPU Utilization threshold. In addition, two tests were run exploring the capabilities of different metrics when faced with ideal or challenging conditions. The results did find a superior metric that was able to perform successfully under both tests. 3 Dedication I lovingly dedicate this thesis to my gracious and devoted mother for her unwavering love and for always believing in me. 
4 Acknowledgments This thesis would not have been possible without the support of many people. My wish is to express humble gratitude to the committee chair, Prof. Sharon Mason, who was perpetually generous in offering her invaluable assistance, support, and guidance. Deepest gratitude is also due to the members of my supervisory committee, Prof. Lawrence Hill and Prof. Jim Leone, without whose knowledge and direction this study would not have been successful. Special thanks also to Prof. Charles Border for his financial support of this thesis and priceless assistance. Profound gratitude to my mother, Moneerah, who has been there from the very beginning, for her support and endless love. I would also like to convey thanks to my wife for her patient and unending encouragement and support throughout the duration of my studies; without my wife's encouragement, I would not have completed this degree. I wish to express my gratitude to my beloved sister and brothers for their kind understanding throughout my studies. Special thanks to my friend, Mohammed Almathami, for his …", "title": "" } ]
scidocsrr
1280c28733d7b491a9e2a3178e19bce3
Force Generation by Parallel Combinations of Fiber-Reinforced Fluid-Driven Actuators
[ { "docid": "03a8635fcb64117d5a2a6f890c2b03b5", "text": "This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.", "title": "" }, { "docid": "e259e255f9acf3fa1e1429082e1bf1de", "text": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion.", "title": "" }, { "docid": "b6ceacf3ad3773acddc3452933b57a0f", "text": "The growing interest in robots that interact safely with humans and surroundings have prompted the need for soft structural embodiments including soft actuators. This paper explores a class of soft actuators inspired in design and construction by Pneumatic Artificial Muscles (PAMs) or McKibben Actuators. These bio-inspired actuators consist of fluid-filled elastomeric enclosures that are reinforced with fibers along a specified orientation and are in general referred to as Fiber-Reinforced Elastomeric Enclosures (FREEs). Several recent efforts have mapped the fiber configurations to instantaneous deformation, forces, and moments generated by these actuators upon pressurization with fluid. However most of the actuators, when deployed undergo large deformations and large overall motions thus necessitating the study of their large-deformation kinematics. This paper analyzes the large deformation kinematics of FREEs. A concept called configuration memory effect is proposed to explain the smart nature of these actuators. This behavior is tested with experiments and finite element modeling for a small sample of actuators. 
The paper also describes different possibilities and design implications of the large deformation behavior of FREEs in the successful creation of soft robots.", "title": "" } ]
[ { "docid": "e28b8c08275947f0908f64d117f5dc8e", "text": "We propose a method for using synthetic data to help learning classifiers. Synthetic data, even is generated based on real data, normally results in a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder(MCAE). We show that by suing MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed approach, we conduct experiments on two types of datasets. Experimental results on two datasets validate the efficiency of our MCAE model and our methodology of generating synthetic data.", "title": "" }, { "docid": "f59fd6af9dea570b49c453de02297f4c", "text": "OBJECTIVES\nThe role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0.Multiple studies illustrated the use of information in social media to discover biomedical and health-related knowledge.Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words.These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data where sparsity and noise are norms.This paper aims to address the limitations posed by the traditional bag-of-word based methods and propose to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove to be useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data.Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data.\n\n\nMETHODOLOGY\nSocial media data is characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically.The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise.We first identify the limitations of the traditional methods which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data.The parameter analysis for tuning each classifier is also reported.\n\n\nDATA SETS\nThree data sets are used in this research.The first data set comprises of approximately 5000 hand-labeled tweets, and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment.The second data set is a random sample of real-world Twitter data in the US.The third data set is a random sample of real-world Facebook Timeline posts.\n\n\nEVALUATIONS\nTwo sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations.The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models, and to compare with the stage-of-the-art method.The large scale evaluation 
tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media.\n\n\nFINDINGS\nThe small scale experiment reveals that the proposed method is able to mitigate the limitations in the well established techniques existing in the literature, resulting in performance improvement of 18.61% (F-measure).The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-Measure) on average.", "title": "" }, { "docid": "4e006cd320506a5ef244eedd3f761756", "text": "Document classification is a growing interest in the research of text mining. Correctly identifying the documents into particular category is still presenting challenge because of large and vast amount of features in the dataset. In regards to the existing classifying approaches, Naïve Bayes is potentially good at serving as a document classification model due to its simplicity. The aim of this paper is to highlight the performance of employing Naïve Bayes in document classification. Results show that Naïve Bayes is the best classifiers against several common classifiers (such as decision tree, neural network, and support vector machines) in term of accuracy and computational efficiency.", "title": "" }, { "docid": "5aebd19c78b6b24c612e20970c27044f", "text": "The concept of alignment or fit between information technology (IT) and business strategy has been discussed for many years, and strategic alignment is deemed crucial in increasing firm performance. Yet few attempts have been made to investigate the factors that influence alignment, especially in the context of small and medium sized firms (SMEs). This issue is important because results from previous studies suggest that many firms struggle to achieve alignment. Therefore, this study sought to identify different levels of alignment and then investigated the factors that influence alignment. In particular, it focused on the alignment between the requirements for accounting information (AIS requirements) and the capacity of accounting systems (AIS capacity) to generate the information, in the specific context of manufacturing SMEs in Malaysia. Using a mail questionnaire, data from 214 firms was collected on nineteen accounting information characteristics for both requirements and capacity. The fit between these two sets was explored using the moderation approach and evidence was gained that AIS alignment in some firms was high. Cluster analysis was used to find two sets of groups which could be considered more aligned and less aligned. The study then investigated some factors that might be associated with a small firm’s level of AIS alignment. Findings from the study suggest that AIS alignment was related to the firm’s: level of IT maturity; level of owner/manager’s accounting and IT knowledge; use of expertise from government agencies and accounting firms; and existence of internal IT staff.", "title": "" }, { "docid": "ccbb7e753b974951bb658b63e91431bb", "text": "In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). 
CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "title": "" }, { "docid": "1c6a14765f2fefd517b174fdc4f9e45b", "text": "Epilepsy affects 65 million people worldwide and entails a major burden in seizure-related disability, mortality, comorbidities, stigma, and costs. In the past decade, important advances have been made in the understanding of the pathophysiological mechanisms of the disease and factors affecting its prognosis. These advances have translated into new conceptual and operational definitions of epilepsy in addition to revised criteria and terminology for its diagnosis and classification. Although the number of available antiepileptic drugs has increased substantially during the past 20 years, about a third of patients remain resistant to medical treatment. Despite improved effectiveness of surgical procedures, with more than half of operated patients achieving long-term freedom from seizures, epilepsy surgery is still done in a small subset of drug-resistant patients. The lives of most people with epilepsy continue to be adversely affected by gaps in knowledge, diagnosis, treatment, advocacy, education, legislation, and research. Concerted actions to address these challenges are urgently needed.", "title": "" }, { "docid": "c6e1c8aa6633ec4f05240de1a3793912", "text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. 
Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.", "title": "" }, { "docid": "7200c6c09c38e2fb363360ae8bb473ff", "text": "This work describes autofluorescence of the mycelium of the dry rot fungus Serpula lacrymans grown on spruce wood blocks impregnated with various metals. Live mycelium, as opposed to dead mycelium, exhibited yellow autofluorescence upon blue excitation, blue fluorescence with ultraviolet (UV) excitation, orange-red and light-blue fluorescence with violet excitation, and red fluorescence with green excitation. Distinctive autofluorescence was observed in the fungal cell wall and in granula localized in the cytoplasm. In dead mycelium, the intensity of autofluorescence decreased and the signal was diffused throughout the cytoplasm. Metal treatment affected both the color and intensity of autofluorescence and also the morphology of the mycelium. The strongest yellow signal was observed with blue excitation in Cd-treated samples, in conjunction with increased branching and the formation of mycelial loops and protrusions. For the first time, we describe pink autofluorescence that was observed in Mn-, Zn-, and Cu-treated samples with UV, violet or. blue excitation. The lowest signals were obtained in Cu- and Fe-treated samples. Chitin, an important part of the fungal cell wall exhibited intensive primary fluorescence with UV, violet, blue, and green excitation.", "title": "" }, { "docid": "e0632c86f648a36f083b56d534746c02", "text": "At present, the brain is viewed primarily as a biological computer. But, crucially, the plasticity of the brain’s structure leads it to vary in functionally significant ways across individuals. Understanding the brain necessitates an understanding of the range of such variation. For example, the number of neurons in the brain and its finer structures impose inherent limitations on the functionality it can realize. The relationship between such quantitative limits on the resources available and the computations that are feasible with such resources is the subject of study in computational complexity theory. Computational complexity is a potentially useful conceptual framework because it enables the meaningful study of the family of possible structures as a whole—the study of “the brain,” as opposed to some particular brain. The language of computational complexity also provides a means of formally capturing capabilities of the brain, which may otherwise be philosophically thorny.", "title": "" }, { "docid": "a972153f00c01f918f335d0877029184", "text": "Direct volume rendering offers the opportunity to visualize all of a three-dimensional sample volume in one image. However, processing such images can be very expensive and good quality high-resolution images are far from interactive. Projection approaches to direct volume rendering process the volume region by region as opposed to ray-casting methods that process it ray by ray. Projection approaches have generated interest because they use coherence to provide greater speed than ray casting and generate the image in a layered, informative fashion. 
This paper discusses two topics: First, it introduces a projection approach for directly rendering rectilinear, parallel-projected sample volumes that takes advantage of coherence across cells and the identical shape of their projection. Second, it considers the repercussions of various methods of integration in depth and interpolation across the scan plane. Some of these methods take advantage of Gouraud-shading hardware, with advantages in speed but potential disadvantages in image quality.", "title": "" }, { "docid": "1e6310e8b16625e8f8319c7386723e55", "text": "Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compartments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention.\n We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH server with the OpenSSL library as well as CryptoLoop with small efforts. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.", "title": "" }, { "docid": "613057956e5c40e1257ece734bbe5246", "text": "In this paper, we prove some convergence properties for a class of ant colony optimization algorithms. In particular, we prove that for any small constant ε > 0 and for a sufficiently large number of algorithm iterations t, the probability of finding an optimal solution at least once is P*(t) ≥ 1 - ε and that this probability tends to 1 for t → ∞. 
We also prove that, after an optimal solution has been found, it takes a finite number of iterations for the pheromone trails associated to the found optimal solution to grow higher than any other pheromone trail and that, for t → ∞, any fixed ant will produce the optimal solution during the t-th iteration with probability P ≥ 1 - ε̂(τ_min, τ_max), where τ_min and τ_max are the minimum and maximum values that can be taken by pheromone trails.", "title": "" }, { "docid": "dc783054dac29af7d08cee0a13259a8d", "text": "This paper develops a novel flexible capacitive tactile sensor array for prosthetic hand gripping force measurement. The sensor array has 8 × 8 (= 64) sensing units, each sensing unit has a four-layered structure: two thick PET layers with embedded copper electrodes generates a capacitor, a PDMS film with line-structure used as an insulation layer, and a top PDMS bump layer to concentrate external force. The structural design, working principle, and fabrication process of this sensor array are presented. The fabricated tactile sensor array features high flexibility has a spatial resolution of 2 mm. This is followed by the characterization of the sensing unit for normal force measurement and found that the sensing unit has two sensitivities: 4.82 ‰/mN for small contact force and 0.23 ‰/mN for large gripping force measurements. Finally, the tactile sensor array is integrated into a prosthetic hand for gripping force measurement. Results showed that the developed flexible capacitive tactile sensor array could be utilized for tactile sensing and real-time contact force visualization for prosthetic hand gripping applications.", "title": "" }, { "docid": "1facd226c134b22f62613073deffce60", "text": "We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed such a significant increase in performance of long-term spatial memory: The results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% for egocentric body movements in spatial memory. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of the egocentric navigation was significantly higher but there was less mental demand.", "title": "" }, { "docid": "c7d80cd2f45eeea465c22c9d17c3af36", "text": "In this article, a shifted Legendre tau method is introduced to get a direct solution technique for solving multi-order fractional differential equations (FDEs) with constant coefficients subject to multi-point boundary conditions. The fractional derivative is described in the Caputo sense. Also, this article reports a systematic quadrature tau method for numerically solving multi-point boundary value problems of fractional-order with variable coefficients. Here the approximation is based on shifted Legendre polynomials and the quadrature rule is treated on shifted Legendre Gauss-Lobatto points. We also present a Gauss-Lobatto shifted Legendre collocation method for solving nonlinear multi-order FDEs with multi-point boundary conditions. 
The main characteristic behind this approach is that it reduces such problem to those of solving a system of algebraic equations. Thus we can find directly the spectral solution of the proposed problem. Through several numerical examples, we evaluate the accuracy and performance of the proposed algorithms.", "title": "" }, { "docid": "905d760630c3c020bcac0174885afd72", "text": "Component containers are a key part of mainstream component technologies, and play an important role in separating non-functional concerns from the core component logic. This paper addresses two different aspects of containers. First, it shows how generative programming techniques, using AspectC++ and meta-programming, can be used to generate stubs and skeletons without the need for special compilers or interface description languages. Second, the paper describes an approach to create custom containers by composing different non-functional features. Unlike component technologies such as EJB, which only support a predefined set of container types, this approach allows different combinations of non-functional features to be composed in a container to meet the application needs.", "title": "" }, { "docid": "75c2b1565c61136bf014d5e67eb52daf", "text": "This paper describes a system for dense depth estimation for multiple images in real-time. The algorithm runs almost entirely on standard graphics hardware, leaving the main CPU free for other tasks as image capture, compression and storage during scene capture. We follow a plain-sweep approach extended by truncated SSD scores, shiftable windows and best camera selection. We do not need specialized hardware and exploit the computational power of freely programmable PC graphics hardware. Dense depth maps are computed with up to 20 fps.", "title": "" }, { "docid": "c5bc51e3e2ad5aedccfa17095ec1d7ed", "text": "CONTEXT\nLittle is known about the extent or severity of untreated mental disorders, especially in less-developed countries.\n\n\nOBJECTIVE\nTo estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFace-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia.\n\n\nMAIN OUTCOME MEASURES\nThe DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview.\n\n\nRESULTS\nThe prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. 
Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country.\n\n\nCONCLUSIONS\nReallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.", "title": "" }, { "docid": "1b1953e3dd28c67e7a8648392422df88", "text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.", "title": "" }, { "docid": "30aaf753d3ec72f07d4838de391524ca", "text": "The present study was aimed to determine the effect on liver, associated oxidative stress, trace element and vitamin alteration in dogs with sarcoptic mange. A total of 24 dogs with clinically established diagnosis of sarcoptic mange, divided into two groups, severely infested group (n=9) and mild/moderately infested group (n=15), according to the extent of skin lesions caused by sarcoptic mange and 6 dogs as control group were included in the present study. In comparison to healthy control hemoglobin, PCV, and TEC were significantly (P<0.05) decreased in dogs with sarcoptic mange however, significant increase in TLC along with neutrophilia and lymphopenia was observed only in severely infested dogs. The albumin, glucose and cholesterol were significantly (P<0.05) decreased and globulin, ALT, AST and bilirubin were significantly (P<0.05) increased in severely infested dogs when compared to other two groups. Malondialdehyde (MDA) levels were significantly (P<0.01) higher in dogs with sarcoptic mange, with levels highest in severely infested groups. Activity of superoxide dismutase (SOD) (P<0.05) and catalase were significantly (P<0.01) lower in sarcoptic infested dogs when compared with the healthy control group. Zinc and copper levels in dogs with sarcoptic mange were significantly (P<0.05) lower when compared with healthy control group with the levels lowest in severely infested group. Vitamin A and vitamin C levels were significantly (P<0.05) lower in sarcoptic infested dogs when compared to healthy control. From the present study, it was concluded that sarcoptic mange in dogs affects the liver and the infestation is associated with oxidant/anti-oxidant imbalance, significant alteration in trace elements and vitamins.", "title": "" } ]
scidocsrr
88f3fe0dca0f76febdb3f4f42363cfae
Bitcoin Beacon
[ { "docid": "f2a66fb35153e7e10d93fac5c8d29374", "text": "A widespread security claim of the Bitcoin system, presented in the original Bitcoin white-paper, states that the security of the system is guaranteed as long as there is no attacker in possession of half or more of the total computational power used to maintain the system. This claim, however, is proved based on theoretically awed assumptions. In the paper we analyze two kinds of attacks based on two theoretical aws: the Block Discarding Attack and the Di culty Raising Attack. We argue that the current theoretical limit of attacker's fraction of total computational power essential for the security of the system is in a sense not 1 2 but a bit less than 1 4 , and outline proposals for protocol change that can raise this limit to be as close to 1 2 as we want. The basic idea of the Block Discarding Attack has been noted as early as 2010, and lately was independently though-of and analyzed by both author of this paper and authors of a most recently pre-print published paper. We thus focus on the major di erences of our analysis, and try to explain the unfortunate surprising coincidence. To the best of our knowledge, the second attack is presented here for the rst time.", "title": "" } ]
[ { "docid": "9963e1f7126812d9111a4cb6a8eb8dc6", "text": "The renewed interest in grapheme to phoneme conversion (G2P), due to the need of developing multilingual speech synthesizers and recognizers, suggests new approaches more efficient than the traditional rule&exception ones. A number of studies have been performed to investigate the possible use of machine learning techniques to extract phonetic knowledge in a automatic way starting from a lexicon. In this paper, we present the results of our experiments in this research field. Starting from the state of art, our contribution is in the development of a language-independent learning scheme for G2P based on Classification and Regression Trees (CART). To validate our approach, we realized G2P converters for the following languages: British English, American English, French and Brazilian Portuguese.", "title": "" }, { "docid": "e08990fec382e1ba5c089d8bc1629bc5", "text": "Goal-oriented spoken dialogue systems have been the most prominent component in todays virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge of the prior work and the recent state-of-the-art work. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing most recent research for building dialogue systems, and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw. 1 Tutorial Overview With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft’s Cortana, Apple’s Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge on the benchmark of the models of the prior work and the recent state-of-the-art work. The goal of this tutorial is to provide the audience with the developing trend of dialogue systems, and a roadmap to get them started with the related work. The first section motivates the work on conversationbased intelligent agents, in which the core underlying system is task-oriented dialogue systems. The following section describes different approaches using deep learning for each component in the dialogue system and how it is evaluated. The last two sections focus on discussing the recent trends and current challenges on dialogue system technology and summarize the challenges and conclusions. The detailed content is described as follows. 
2 Dialogue System Basics This section will motivate the work on conversation-based intelligent agents, in which the core underlying system is task-oriented spoken dialogue systems. The section starts with an overview of the standard pipeline framework for dialogue system illustrated in Figure 1 (Tur and De Mori, 2011). Basic components of a dialog system are automatic speech recognition (ASR), language understanding (LU), dialogue management (DM), and natural language generation (NLG) (Rudnicky et al., 1999; Zue et al., 2000; Zue and Glass, 2000). This tutorial will mainly focus on LU, DM, and NLG parts.", "title": "" }, { "docid": "60114bebc1b64a3bfd5dc010a1a4891c", "text": "Attachment anxiety is expected to be positively associated with dependence and self-criticism. However, attachment avoidance is expected to be negatively associated with dependence but positively associated with self-criticism. Both dependence and self-criticism are expected to be related to depressive symptoms. Data were analyzed from 424 undergraduate participants at a large Midwestern university, using structural equation modeling. Results indicated that the relation between attachment anxiety and depressive symptoms was fully mediated by dependence and self-criticism, whereas the relation between attachment avoidance and depressive symptoms was partially mediated by dependence and self-criticism. Moreover, through a multiple-group comparison analysis, the results indicated that men with high levels of attachment avoidance are more likely than women to be self-critical.", "title": "" }, { "docid": "2892a61cd6097e4bf1f580a0f36e8a9e", "text": "In this paper, a low-power full-band low-noise amplifier (FB-LNA) for ultra-wideband applications is presented. The proposed FB-LNA uses a stagger-tuning technique to extend the full bandwidth from 3.1 to 10.6 GHz. A current-reused architecture is employed to decrease the power consumption. By using an input common-gate stage, the input resistance of 50 Ω can be obtained without an extra input-matching network. The output matching is achieved by cascading an output common-drain stage. FB-LNA was implemented with a TSMC 0.18-μm CMOS process. On-wafer measurement shows an average power gain of 9.7 dB within the full operation band. The input reflection coefficient and the output reflection coefficient are both less than -10 dB over the entire band. The noise figure of the full band remained under 7 dB with a minimum value of 5.27 dB. The linearity of input third-order intercept point is -2.23 dBm. The power consumptions at 1.5-V supply voltage without an output buffer is 4.5 mW. The chip area occupies 1.17 × 0.88 mm2.", "title": "" }, { "docid": "a412cff5999d0c257562335465a28323", "text": "In transfer learning, what and how to transfer are two primary issues to be addressed, as different transfer learning algorithms applied between a source and a target domain result in different knowledge transferred and thereby the performance improvement in the target domain. Determining the optimal one that maximizes the performance improvement requires either exhaustive exploration or considerable expertise. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. 
Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how to transfer are the best by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer are the best for a future pair of domains by optimizing the reflection function. We also theoretically analyse the algorithmic stability and generalization bound of L2T, and empirically demonstrate its superiority over several state-ofthe-art transfer learning algorithms.", "title": "" }, { "docid": "79910e1dadf52be1b278d2e57d9bdb9e", "text": "Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.", "title": "" }, { "docid": "ce48548c0004b074b18f95792f3e6ce8", "text": "In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification. We first design a new auxiliary task based on sentiment scores of domain-independent words. We then propose two neural network architectures to respectively induce document embeddings and sentence embeddings that work well for different domains. When these document and sentence embeddings are used for sentiment classification, we find that with both pseudo and external sentiment lexicons, our proposed methods can perform similarly to or better than several highly competitive domain adaptation methods on a benchmark dataset of product reviews.", "title": "" }, { "docid": "63262d2a9abdca1d39e31d9937bb41cf", "text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.", "title": "" }, { "docid": "ab677299ffa1e6ae0f65daf5de75d66c", "text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. 
This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.", "title": "" }, { "docid": "39c097ba72618ccc901e714b855d3048", "text": "In this paper we present a pattern for growth mindset development. We believe that students can be taught to positively change their mindset, where experience, training, and personal effort can add to a unique student's genetic endowment. We use our long years' experience and synthesized facilitation methods and techniques to assess insight mentoring and to improve it through growth mindset development. These can help students make creative changes in their life and see the world with new eyes in a new way. The pattern allows developing a growth mindset and improving our lives and the lives of those around us.", "title": "" }, { "docid": "ced688e5215ba23fd8bcb8c2ba8584d3", "text": "N2pc is generally interpreted as the electrocortical correlate of the distractor-suppression mechanisms through which attention selection takes place in humans. Here, we present data that challenge this common N2pc interpretation. In Experiment 1, multiple distractors induced greater N2pc amplitudes even when they facilitated target identification, despite the suppression account of the N2pc predicted the contrary; in Experiment 2, spatial proximity between target and distractors did not affect the N2pc amplitude, despite resulting in more interference in response times; in Experiment 3, heterogeneous distractors delayed response times but did not elicit a greater N2pc relative to homogeneous distractors again in contrast with what would have predicted the suppression hypothesis. These results do not support the notion that the N2pc unequivocally mirrors distractor-suppression processes. 
We propose that the N2pc indexes mechanisms involved in identifying and localizing relevant stimuli in the scene through enhancement of their features and not suppression of distractors.", "title": "" }, { "docid": "e8ac779e821b27e7cb7fb63716bc1024", "text": "Misogynist abuse has now become serious enough to attract attention from scholars of Law [7]. Social network platform providers have been forced to address this issue, such that Twitter is now very clear about what constitutes abusive behaviour, and has responded by updating their trust and safety rules [16].", "title": "" }, { "docid": "b06653abc5e287c72fc68247610ef76a", "text": "Radio Frequency Identification (RFID) is the name given to a technology that uses tags, readers and backend servers to form a system that has numerous applications in many areas, many discovered and the rest yet to be explored. Before implementing an RFID system, security issues must be considered carefully; not taking care of security issues could lead to severe consequences. This paper is an overview of an introduction to RFID, RFID fundamentals, the basic structure of an RFID system, some of its numerous applications, and security issues and their remedies.", "title": "" }, { "docid": "a37fa6118f4ff2e92977186ec7d5c3c6", "text": "The determination of prices is a key function of markets yet it is just beginning to be studied by sociologists. Most theories view prices as a consequence of economic processes. By contrast, we consider how social structure shapes prices. Building on embeddedness arguments and original fieldwork at large law firms, we propose that a firm's embedded relationships influence prices by prompting private information flows and informal governance arrangements that add unique value to goods and services. We test our arguments with a separate longitudinal dataset on the pricing of legal services by law firms that represent corporate America. We find that embeddedness can significantly increase and decrease prices net of standard variables and in markets for both complex and routine legal services. Moreover, results show that three forms of embeddedness (embedded ties, board memberships, and status) affect prices in different directions and have different magnitudes of effects that depend on the complexity of the legal service.", "title": "" }, { "docid": "77ff4bd27b795212d355162822fc0cdc", "text": "We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images.
Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes.", "title": "" }, { "docid": "b4b66392aec0c4e00eb6b1cabbe22499", "text": "ADJ: Adjectives that occur with the NP CMC: Orthographic features of the NP CPL: Phrases that occur with the NP VERB: Verbs that appear with the NP Task: Predict whether a noun phrase (NP) belongs to a category (e.g. “city”) Category # Examples animal 20,733 beverage 18,932 bird 19,263 bodypart 21,840 city 21,778 disease 21,827 drug 20,452 fish 19,162 food 19,566 fruit 18,911 muscle 21,606 person 21,700 protein 21,811 river 21,723 vegetable 18,826", "title": "" }, { "docid": "88bc4f8a24a2e81a9c133d11a048ca10", "text": "In this paper, we give an overview of the HDF5 technology suite and some of its applications. We discuss the HDF5 data model, the HDF5 software architecture and some of its performance enhancing capabilities.", "title": "" }, { "docid": "c9b0954503fa8b6309a0736ac1a5cb62", "text": "Rigid Point Cloud Registration (PCReg) refers to the problem of finding the rigid transformation between two sets of point clouds. This problem is particularly important due to the advances in new 3D sensing hardware, and it is challenging because neither the correspondence nor the transformation parameters are known. Traditional local PCReg methods (e.g., ICP) rely on local optimization algorithms, which can get trapped in bad local minima in the presence of noise, outliers, bad initializations, etc. To alleviate these issues, this paper proposes Inverse Composition Discriminative Optimization (ICDO), an extension of Discriminative Optimization (DO), which learns a sequence of update steps from synthetic training data that search the parameter space for an improved solution. Unlike DO, ICDO is object-independent and generalizes even to unseen shapes. We evaluated ICDO on both synthetic and real data, and show that ICDO can match the speed and outperform the accuracy of state-of-the-art PCReg algorithms.", "title": "" }, { "docid": "d1bd5406b31cec137860a73b203d6bef", "text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry ®lm thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ,40±70 mm slurry ®lm thickness) and the contact regime (for thinner ®lms). These regimes are identi®ed for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. q 2000 Elsevier Science S.A. All rights reserved.", "title": "" } ]
scidocsrr
d5d1c9cbefdd02f1f567aa0bd15db3fd
Video game play, attention, and learning: how to shape the development of attention and influence learning?
[ { "docid": "af7803b0061e75659f718d56ba9715b3", "text": "An emerging body of multidisciplinary literature has documented the beneficial influence of physical activity engendered through aerobic exercise on selective aspects of brain function. Human and non-human animal studies have shown that aerobic exercise can improve a number of aspects of cognition and performance. Lack of physical activity, particularly among children in the developed world, is one of the major causes of obesity. Exercise might not only help to improve their physical health, but might also improve their academic performance. This article examines the positive effects of aerobic physical activity on cognition and brain function, at the molecular, cellular, systems and behavioural levels. A growing number of studies support the idea that physical exercise is a lifestyle factor that might lead to increased physical and mental health throughout life.", "title": "" } ]
[ { "docid": "133eccbb62434ad3444962dcf091226c", "text": "We propose a novel multi-sensor system for accurate and power-efficient dynamic car-driver hand-gesture recognition, using a short-range radar, a color camera, and a depth camera, which together make the system robust against variable lighting conditions. We present a procedure to jointly calibrate the radar and depth sensors. We employ convolutional deep neural networks to fuse data from multiple sensors and to classify the gestures. Our algorithm accurately recognizes 10 different gestures acquired indoors and outdoors in a car during the day and at night. It consumes significantly less power than purely vision-based systems.", "title": "" }, { "docid": "abcb9b8feb996917df2dcbd85dbeaff4", "text": "Nearly all aspects of modern life are in some way being changed by big data and machine learning. Netflix knows what movies people like to watch and Google knows what people want to know based on their search histories. Indeed, Google has recently begun to replace much of its existing non–machine learning technology with machine learning algorithms, and there is great optimism that these techniques can provide similar improvements across many sectors. It is no surprise then that medicine is awash with claims of revolution from the application of machine learning to big health care data. Recent examples have demonstrated that big data and machine learning can create algorithms that perform on par with human physicians.1 Though machine learning and big data may seem mysterious at first, they are in fact deeply related to traditional statistical models that are recognizable to most clinicians. It is our hope that elucidating these connections will demystify these techniques and provide a set of reasonable expectations for the role of machine learning and big data in health care. Machine learning was originally described as a program that learns to perform a task or make a decision automatically from data, rather than having the behavior explicitly programmed. However, this definition is very broad and could cover nearly any form of data-driven approach. For instance, consider the Framingham cardiovascular risk score, which assigns points to various factors and produces a number that predicts 10-year cardiovascular risk. Should this be considered an example of machine learning? The answer might obviously seem to be no. Closer inspection of the Framingham risk score reveals that the answer might not be as obvious as it first seems. The score was originally created2 by fitting a proportional hazards model to data from more than 5300 patients, and so the "rule" was in fact learned entirely from data. Designating a risk score as a machine learning algorithm might seem a strange notion, but this example reveals the uncertain nature of the original definition of machine learning. It is perhaps more useful to imagine an algorithm as existing along a continuum between fully human-guided vs fully machine-guided data analysis. To understand the degree to which a predictive or diagnostic algorithm can be said to be an instance of machine learning requires understanding how much of its structure or parameters were predetermined by humans. The trade-off between human specification of a predictive algorithm's properties vs learning those properties from data is what is known as the machine learning spectrum.
Returning to the Framingham study, to create the original risk score, statisticians and clinical experts worked together to make many important decisions, such as which variables to include in the model, the relationship between the dependent and independent variables, and variable transformations and interactions. Since considerable human effort was used to define these properties, it would place low on the machine learning spectrum (#19 in the Figure and Supplement). Many evidence-based clinical practices are based on a statistical model of this sort, and so many clinical decisions in fact exist on the machine learning spectrum (middle left of Figure). On the extreme low end of the machine learning spectrum would be heuristics and rules of thumb that do not directly involve the use of any rules or models explicitly derived from data (bottom left of Figure). Suppose a new cardiovascular risk score is created that includes possible extensions to the original model. For example, it could be that risk factors should not be added but instead should be multiplied or divided, or perhaps a particularly important risk factor should square the entire score if it is present. Moreover, if it is not known in advance which variables will be important, but thousands of individual measurements have been collected, how should a good model be identified from among the infinite possibilities? This is precisely what a machine learning algorithm attempts to do. As humans impose fewer assumptions on the algorithm, it moves further up the machine learning spectrum. However, there is never a specific threshold wherein a model suddenly becomes "machine learning"; rather, all of these approaches exist along a continuum, determined by how many human assumptions are placed onto the algorithm. An example of an approach high on the machine learning spectrum has recently emerged in the form of so-called deep learning models. Deep learning models are stunningly complex networks of artificial neurons that were designed expressly to create accurate models directly from raw data. Researchers recently demonstrated a deep learning algorithm capable of detecting diabetic retinopathy (#4 in the Figure, top center) from retinal photographs at a sensitivity equal to or greater than that of ophthalmologists.1 This model learned the diagnosis procedure directly from the raw pixels of the images with no human intervention outside of a team of ophthalmologists who annotated each image with the correct diagnosis. Because they are able to learn the task with little human instruction or prior assumptions, these deep learning algorithms rank very high on the machine learning spectrum (Figure, light blue circles). Though they require less human guidance, deep learning algorithms for image recognition require enormous amounts of data to capture the full complexity, variety, and nuance inherent to real-world images. Consequently, these algorithms often require hundreds of thousands of examples to extract the salient image features that are correlated with the outcome of interest. Higher placement on the machine learning spectrum does not imply superiority, because different tasks require different levels of human involvement.
While algorithms high on the spectrum are often very flexible and can learn many tasks, they are often uninterpretable.", "title": "" }, { "docid": "cd407caad37c33ee5540b079e94782c7", "text": "Despite the remarkable recent progress, person reidentification (Re-ID) approaches are still suffering from the failure cases where the discriminative body parts are missing. To mitigate such cases, we propose a simple yet effective Horizontal Pyramid Matching (HPM) approach to fully exploit various partial information of a given person, so that correct person candidates can still be identified even if some key parts are missing. Within the HPM, we make the following contributions to produce a more robust feature representation for the Re-ID task: 1) we learn to classify using partial feature representations at different horizontal pyramid scales, which successfully enhance the discriminative capabilities of various person parts; 2) we exploit average and max pooling strategies to account for person-specific discriminative information in a global-local manner. To validate the effectiveness of the proposed HPM, extensive experiments are conducted on three popular benchmarks, including Market-1501, DukeMTMC-ReID and CUHK03. In particular, we achieve mAP scores of 83.1%, 74.5% and 59.7% on these benchmarks, which are the new state of the art. Our code is available on Github.", "title": "" }, { "docid": "b5453d9e4385d5a5ff77997ad7e3f4f0", "text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "title": "" }, { "docid": "d59f6325233544b2deaaa60b8743312a", "text": "Printed documents are vulnerable to forgery through the latest technology developments, and protecting them has become extremely important. Most forgeries can result in loss of personal identity or ownership of a certain valuable object. This paper proposes a novel authentication technique and schema for printed document authentication using a watermarked QR (Quick Response) code. The technique is based on a watermarked QR code generated by embedding a logo that belongs to the owner of the document and contains a validation link, and the schema checks the validation link of the printed document, which is linked to the web server and database server through an internet connection, by scanning it with a camera phone and QR code reader; as a result, the validation can be done in real time using smart phones based on Android, BlackBerry, and iOS. To get good performance in extracting and validating a printed document, the validation link can be prepared in advance via an internet connection to authenticate the hidden information. Finally, this paper provides experimental results to demonstrate the authentication of printed documents using the watermarked QR code.", "title": "" }, { "docid": "fd9e8a79decf68721fb0dd81f16a5f8b", "text": "Feeder reconfiguration (FRC) is an important function of a distribution automation system. It modifies the topology of the distribution network by changing the open/close statuses of tie switches and sectionalizing switches.
The change of topology redirects the power flow within the distribution network, in order to obtain a better performance of the system. Various methods have been explored to solve FRC problems. This paper presents a literature survey on distribution system FRC. Among many aspects to be reviewed for a comprehensive study, this paper focuses on FRC objectives and solution methods. The problem definition of FRC is first discussed, the objectives are summarized, and various solution methods are categorized and evaluated.", "title": "" }, { "docid": "31d7a6da7093d50d0d5890cce4cb60cf", "text": "We introduce a novel Gaussian process based Bayesian model for asymmetric transfer learning. We adopt a two-layer feed-forward deep Gaussian process as the task learner of source and target domains. The first layer projects the data onto a separate non-linear manifold for each task. We perform knowledge transfer by projecting the target data also onto the source domain and linearly combining its representations on the source and target domain manifolds. Our approach achieves the state-of-the-art in a benchmark real-world image categorization task, and improves on it in cross-tissue tumor detection from histopathology tissue slide images.", "title": "" }, { "docid": "05eb1af3e6838640b6dc5c1c128cc78a", "text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.", "title": "" }, { "docid": "9e42d25a6a7bc3aad562e58721c7650d", "text": "The purpose of this retrospective study was to illustrate the differences in maternal and paternal filicides in Finland during a 25-year period. In the sample of 200 filicides [neonaticides (n = 56), filicide-suicides (n = 75), other filicides (n = 69)], the incidence was 5.09 deaths per 100,000 live births: 59 percent of filicides were committed by mothers, 39 percent by fathers, and 2 percent by stepfathers. The mean age of the maternal victims (1.6 y) was significantly lower than that of the paternal victims (5.6 y), but no correlation between the sex of the victim and the sex of the perpetrator was found, and the number of female and male victims was equal. The sample of other filicides (n = 65) was studied more closely by forensic psychiatric examination and review of collateral files. Filicidal mothers showed mental distress and often had psychosocial stressors of marital discord and lack of support. They often killed for altruistic reasons and in association with suicide. Maternal perpetrators also dominated in filicide cases in which death was caused by a single episode or recurrent episodes of battering. 
Psychosis and psychotic depression were diagnosed in 51 percent of the maternal perpetrators, and 76 percent of the mothers were deemed not responsible for their actions by reason of insanity. Paternal perpetrators, on the other hand, were jealous of their mates, had a personality disorder (67%), abused alcohol (45%), or were violent toward their mates. In 18 percent of the cases, they were not held responsible for their actions by reason of insanity. During childhood, most of the perpetrators had endured emotional abuse from their parents or guardians, some of whom also engaged in alcohol abuse and domestic violence. The purpose of this study was to examine the differences between maternal and paternal filicides in a sample of 200 cases in Finland. This report also provides a psychosocial profile of the perpetrator and victim in 65 filicides and a discussion of the influence of diagnoses on decisions regarding criminal responsibility.", "title": "" }, { "docid": "0ca477c017da24940bb5af79b2c8826a", "text": "Code comprehension is critical in software maintenance. Towards providing tools and approaches to support maintenance tasks, researchers have investigated various research lines related to how software code can be described in an abstract form. So far, studies on change pattern mining, code clone detection, or semantic patch inference have mainly adopted text-, tokenand tree-based representations as the basis for computing similarity among code fragments. Although, in general, existing techniques form clusters of “similar” code, our experience in patch mining has revealed that clusters of patches formed by such techniques do not usually carry explainable semantics that can be associated to bug-fixing patterns. In this paper, we propose a novel, automated approach for mining semantically-relevant fix patterns based on an iterative, three-fold, clustering strategy. Our technique, FixMiner, leverages different tree representations for each round of clustering: the Abstract syntax tree, the edit actions tree, and the code context tree. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in AST diff trees. Eventually, FixMiner yields patterns which can be associated to the semantics of the bugs that the associated patches address. We further leverage the mined patterns to implement an automated program repair pipeline with which we are able to correctly fix 25 bugs from the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 80% of FixMiner’s A. Koyuncu, K. Liu, T. F. Bissyandé, D. Kim, J. Klein, K. Kim, and Y. Le Traon SnT, University of Luxembourg E-mail: {firstname.lastname}@uni.lu M. Monperrus KTH Royal Institute of Technology E-mail: martin.monperrus@csc.kth.se ar X iv :1 81 0. 01 79 1v 1 [ cs .S E ] 3 O ct 2 01 8 2 Anil Koyuncu et al. generated plausible patches are correct, while the closest related works, namely HDRepair and SimFix, achieve respectively 26% and 70% of correctness.", "title": "" }, { "docid": "c6645086397ba0825f5f283ba5441cbf", "text": "Anomalies have broad patterns corresponding to their causes. In industry, anomalies are typically observed as equipment failures. Anomaly detection aims to detect such failures as anomalies. 
Although this is usually a binary classification task, the potential existence of unseen (unknown) failures makes this task difficult. Conventional supervised approaches are suitable for detecting seen anomalies but not for unseen anomalies. Although, unsupervised neural networks for anomaly detection now detect unseen anomalies well, they cannot utilize anomalous data for detecting seen anomalies even if some data have been made available. Thus, providing an anomaly detector that finds both seen and unseen anomalies well is still a tough problem. In this paper, we introduce a novel probabilistic representation of anomalies to solve this problem. The proposed model defines the normal and anomaly distributions using the analogy between a set and the complementary set. We applied these distributions to an unsupervised variational autoencoder (VAE)-based method and turned it into a supervised VAE-based method. We tested the proposed method with well-known data and real industrial data to show that the proposed method detects seen anomalies better than the conventional unsupervised method without degrading the detection performance for unseen anomalies.", "title": "" }, { "docid": "0159d630fc310d32dc76fd88edac49ef", "text": "We consider variants on the Prize Collecting Steiner Tree problem and on the primal-dual 2-approximation algorithm devised for it by Goemans and Williamson. We introduce an improved pruning rule for the algorithm that is slightly faster and provides solutions that are at least as good and typically significantly better. On a selection of real-world instances whose underlying graphs are county street maps, the improvement in the standard objective function ranges from 1.7% to 9.2%. Substantially better improvements are obtained for the complementary \"net worth\" objective function and for randomly generated instances. We also show that modifying the growth phase of the GoemansWilliamson algorithm to make it independent of the choice of root vertex does not significantly affect the algorithm's worst-case guarantee or behavior in practice. The resulting algorithm can be fttrther modified so that, without an increase in running time, it becomes a 2-approximation algorithm for finding the best subtree over all choices of root. In the second part of the paper, we consider quota and budget versions of the problem. In the first, one is looking for the tree with minimum edge cost that contains vertices whose total prize is at least a given quota; in the second one is looking for the tree with maximum prize, given that the total edge cost is within a given budget. The quota problem is a generalization of the k-MST problem, and we observe how constant-factor approximation algorithms for that problem can be extended to it. We also show how a (5 ÷ e)approximation algorithm for the (unrooted) budget problem can be derived from Gaxg's 3-approximation algorithm for the k-MST. None of these algorithms are likely to be used in ~ ractice, but we show how the general approach behind them which involves performing multiple runs of the GoemansWiUiamson algorithm using an increasing sequence of prizemultipliers) can be incorporated into a practical heuristic. We also uncover some surprising properties of the cost/prize tradeoff curves generated (and used) by this approach. 
1 Problem Definitions. In the Prize Collecting Steiner Tree (PCST) problem, one is given a graph G = (V, E), a non-negative edge cost c(e) for each edge e ∈ E, a non-negative vertex prize p(v) for each vertex v ∈ V, and a specified root vertex v0 ∈ V. In this paper we shall consider four different optimization problems based on this scenario, the first being the one initially studied in [6, 7]: 1. The Goemans-Williamson Minimization problem: Find a subtree T' = (V', E') of G that minimizes the cost of the edges in the tree plus the prizes of the vertices not in the tree, i.e., that minimizes GW(T') = Σ_{e ∈ E'} c(e) + Σ_{v ∉ V'} p(v).", "title": "" }, { "docid": "0e30b5ffa34b9a065130688f0b7e44da", "text": "This brief presents a new technique for minimizing reference spurs in a charge-pump phase-locked loop (PLL) while maintaining dead-zone-free operation. The proposed circuitry uses a phase/frequency detector with a variable delay element in its reset path, with the delay length controlled by feedback from the charge-pump. Simulations have been performed with several PLLs to compare the proposed circuitry with previously reported techniques. The proposed approach shows improvements over previously reported techniques of 12 and 16 dB in the two closest reference spurs.", "title": "" }, { "docid": "a4e92e4dc5d93aec4414bc650436c522", "text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.", "title": "" }, { "docid": "b60416c661e1f9c292555955965c7f01", "text": "A 4.9-6.4-Gb/s two-level SerDes ASIC I/O core employing a four-tap feed-forward equalizer (FFE) in the transmitter and a five-tap decision-feedback equalizer (DFE) in the receiver has been designed in 0.13-μm CMOS. The transmitter features a total jitter (TJ) of 35 ps p-p at 10^-12 bit error rate (BER) and can output up to 1200 mVppd into a 100-Ω differential load. Low jitter is achieved through the use of an LC-tank-based VCO/PLL system that achieves a typical random jitter of 0.6 ps over a phase noise integration range from 6 MHz to 3.2 GHz. The receiver features a variable-gain amplifier (VGA) with gain ranging from -6 to +10 dB in ~1 dB steps, an analog peaking amplifier, and a continuously adapted DFE-based data slicer that uses a hybrid speculative/dynamic feedback architecture optimized for high-speed operation. The receiver system is designed to operate with a signal level ranging from 50 to 1200 mVppd. Error-free operation of the system has been demonstrated on lossy transmission line channels with over 32-dB loss at the Nyquist (1/2 Bd rate) frequency.
The Tx/Rx pair with amortized PLL power consumes 290 mW of power from a 1.2-V supply while driving 600 mVppd and uses a die area of 0.79 mm^2.", "title": "" }, { "docid": "7d7c596d334153f11098d9562753a1ee", "text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.", "title": "" }, { "docid": "baaf84ec42f3624cb949f37b5cab83e8", "text": "In this paper, we propose a practical method for user grouping and decoding-order setting in a successive interference canceller (SIC) for downlink non-orthogonal multiple access (NOMA). While the optimal user grouping and decoding order, which depend on the instantaneous channel conditions among users within a cell, are assumed in previous work, the proposed method uses user grouping and a decoding order that are unified among all frequency blocks. The proposed decoding order in the SIC enables the application of NOMA with a SIC to a system where all the elements within a codeword for a user are distributed among multiple frequency blocks (resource blocks). The unified user grouping eases the complexity in the SIC process at the user terminal. The unified user grouping also reduces the complexity of the efficient downlink control signaling in NOMA with a SIC. The unified user grouping and decoding order among frequency blocks in principle reduce the achievable throughput compared to the optimal one. However, based on numerical results, we show that the proposed method does not significantly degrade the system-level throughput in downlink cellular networks.", "title": "" }, { "docid": "1f770561b6f535e36dfb5e43326780a5", "text": "The Red Brick Warehouse™ is a commercial Relational Database Management System designed specifically for query, decision support, and data warehouse applications. Red Brick Warehouse is a software-only system providing ANSI SQL support in an open client/server environment. Red Brick Warehouse is distinguished from traditional RDBMS products by an architecture optimized to deliver high performance in read-mostly, high-intensity query applications. In these applications, the workload is heavily biased toward complex SQL SELECT operations that read but do not update the database.
The average unit of work is very large, and typically involves multi-table joins, aggregation, duplicate elimination, and sorting. Multi-user concurrency is moderate, with typical systems supporting 50 to 500 concurrent user sessions. Query databases are often very large, with tables ranging from 100 million to many billion rows and occupying 50 Gigabytes to 2 Terabytes, Databases are populated by massive bulk-load operations on an hourly, daily, or weekly cycle. Time-series and historical data are maintained for months or years. Red Brick Warehouse makes use of parallel processing as well as other specialized algorithms to achieve outstanding performance and scalability on cost-effective hardware platforms.", "title": "" }, { "docid": "745562de56499ff0030f35afa8d84b7f", "text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.", "title": "" }, { "docid": "3a798fac488b605c145d3ce171f4dcba", "text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Discrimination in credit, mortgage, insurance, labor market, and education has been investigated by researchers in economics and human sciences. With the advent of automatic decision support systems, such as credit scoring systems, the ease of data collection opens several challenges to data analysts for the fight against discrimination. In this article, we introduce the problem of discovering discrimination through data mining in a dataset of historical decision records, taken by humans or by automatic systems. We formalize the processes of direct and indirect discrimination discovery by modelling protected-by-law groups and contexts where discrimination occurs in a classification rule based syntax. Basically, classification rules extracted from the dataset allow for unveiling contexts of unlawful discrimination, where the degree of burden over protected-by-law groups is formalized by an extension of the lift measure of a classification rule. In direct discrimination, the extracted rules can be directly mined in search of discriminatory contexts. In indirect discrimination, the mining process needs some background knowledge as a further input, for example, census data, that combined with the extracted rules might allow for unveiling contexts of discriminatory decisions. A strategy adopted for combining extracted classification rules with background knowledge is called an inference model. In this article, we propose two inference models and provide automatic procedures for their implementation. An empirical assessment of our results is provided on the German credit dataset and on the PKDD Discovery Challenge 1999 financial dataset.", "title": "" } ]
scidocsrr
fe9da71b07b45bef1a7f551e5e0a1f17
Confidence and certainty: distinct probabilistic quantities for different goals
[ { "docid": "50a89110795314b5610fabeaf41f0e40", "text": "People are capable of robust evaluations of their decisions: they are often aware of their mistakes even without explicit feedback, and report levels of confidence in their decisions that correlate with objective performance. These metacognitive abilities help people to avoid making the same mistakes twice, and to avoid overcommitting time or resources to decisions that are based on unreliable evidence. In this review, we consider progress in characterizing the neural and mechanistic basis of these related aspects of metacognition-confidence judgements and error monitoring-and identify crucial points of convergence between methods and theories in the two fields. This convergence suggests that common principles govern metacognitive judgements of confidence and accuracy; in particular, a shared reliance on post-decisional processing within the systems responsible for the initial decision. However, research in both fields has focused rather narrowly on simple, discrete decisions-reflecting the correspondingly restricted focus of current models of the decision process itself-raising doubts about the degree to which discovered principles will scale up to explain metacognitive evaluation of real-world decisions and actions that are fluid, temporally extended, and embedded in the broader context of evolving behavioural goals.", "title": "" } ]
[ { "docid": "e3ccebbfb328e525c298816950d135a5", "text": "It is important for robots to be able to decide whether they can go through a space or not, as they navigate through a dynamic environment. This capability can help them avoid injury or serious damage, e.g., as a result of running into people and obstacles, getting stuck, or falling off an edge. To this end, we propose an unsupervised and a near-unsupervised method based on Generative Adversarial Networks (GAN) to classify scenarios as traversable or not based on visual data. Our method is inspired by the recent success of data-driven approaches on computer vision problems and anomaly detection, and reduces the need for vast amounts of negative examples at training time. Collecting negative data indicating that a robot should not go through a space is typically hard and dangerous because of collisions; whereas collecting positive data can be automated and done safely based on the robot’s own traveling experience. We verify the generality and effectiveness of the proposed approach on a test dataset collected in a previously unseen environment with a mobile robot. Furthermore, we show that our method can be used to build costmaps (we call as ”GoNoGo” costmaps) for robot path planning using visual data only.", "title": "" }, { "docid": "0a2810ea169fac476c0ffe1f3d163c95", "text": "BACKGROUND\nAntidepressant treatment efficacy is low, but might be improved by matching patients to interventions. At present, clinicians have no empirically validated mechanisms to assess whether a patient with depression will respond to a specific antidepressant. We aimed to develop an algorithm to assess whether patients will achieve symptomatic remission from a 12-week course of citalopram.\n\n\nMETHODS\nWe used patient-reported data from patients with depression (n=4041, with 1949 completers) from level 1 of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D; ClinicalTrials.gov, number NCT00021528) to identify variables that were most predictive of treatment outcome, and used these variables to train a machine-learning model to predict clinical remission. We externally validated the model in the escitalopram treatment group (n=151) of an independent clinical trial (Combining Medications to Enhance Depression Outcomes [COMED]; ClinicalTrials.gov, number NCT00590863).\n\n\nFINDINGS\nWe identified 25 variables that were most predictive of treatment outcome from 164 patient-reportable variables, and used these to train the model. The model was internally cross-validated, and predicted outcomes in the STAR*D cohort with accuracy significantly above chance (64·6% [SD 3·2]; p<0·0001). The model was externally validated in the escitalopram treatment group (N=151) of COMED (accuracy 59·6%, p=0.043). 
The model also performed significantly above chance in a combined escitalopram-bupropion treatment group in COMED (n=134; accuracy 59·7%, p=0·023), but not in a combined venlafaxine-mirtazapine group (n=140; accuracy 51·4%, p=0·53), suggesting specificity of the model to underlying mechanisms.\n\n\nINTERPRETATION\nBuilding statistical models by mining existing clinical trial data can enable prospective identification of patients who are likely to respond to a specific antidepressant.\n\n\nFUNDING\nYale University.", "title": "" }, { "docid": "4c2b13b00ce3c92762fa9bfbd34dd0a0", "text": "Technology advances in the areas of Image Processing (IP) and Information Retrieval (IR) have evolved separately for a long time. However, successful content-based image retrieval systems require the integration of the two. There is an urgent need to develop integration mechanisms to link the image retrieval model to the text retrieval model such that the well-established text retrieval techniques can be utilized. Approaches for converting image feature vectors (IP domain) to weighted term vectors (IR domain) are proposed in this paper. Furthermore, the relevance feedback technique from the IR domain is used in content-based image retrieval to demonstrate the effectiveness of this conversion. Experimental results show that the image retrieval precision increases considerably by using the proposed integration approach.", "title": "" }, { "docid": "84c2b96916ce68245cf81bdf8f4b435c", "text": "INTRODUCTION\nComplete and accurate coding of injury causes is essential to the understanding of injury etiology and to the development and evaluation of injury-prevention strategies. While civilian hospitals use ICD-9-CM external cause-of-injury codes, military hospitals use codes derived from the NATO Standardization Agreement (STANAG) 2050.\n\n\nDISCUSSION\nThe STANAG uses two separate variables to code injury cause. The Trauma code uses a single digit with 10 possible values to identify the general class of injury as battle injury, intentionally inflicted nonbattle injury, or unintentional injury. The Injury code is used to identify cause or activity at the time of the injury. For a subset of the Injury codes, the last digit is modified to indicate place of occurrence. This simple system contains fewer than 300 basic codes, including many that are specific to battle- and sports-related injuries not coded well by either the ICD-9-CM or the draft ICD-10-CM. However, while falls, poisonings, and injuries due to machinery and tools are common causes of injury hospitalizations in the military, few STANAG codes correspond to these events. Intentional injuries in general and sexual assaults in particular are also not well represented in the STANAG. Because the STANAG does not map directly to the ICD-9-CM system, quantitative comparisons between military and civilian data are difficult.\n\n\nCONCLUSIONS\nThe ICD-10-CM, which will be implemented in the United States sometime after 2001, expands considerably on its predecessor, ICD-9-CM, and provides more specificity and detail than the STANAG. With slight modification, it might become a suitable replacement for the STANAG.", "title": "" }, { "docid": "4c05d5add4bd2130787fd894ce74323a", "text": "Although a semi-supervised model can extract the event mentions matching frequent event patterns, it suffers much from those event mentions which match infrequent patterns or have no matching pattern.
To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms to semi-supervised Chinese event extraction. These event inference mechanisms can capture linguistic knowledge from four aspects, i.e. semantics of argument role, compositional semantics of trigger, consistency on coreference events and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.", "title": "" }, { "docid": "5a99af400ea048d34ee961ad7f3e3bf6", "text": "Breast cancer is becoming pervasive with each passing day. Hence, its early detection is a big step in saving life of any patient. Mammography is a common tool in breast cancer diagnosis. The most important step here is classification of mammogram patches as normal-abnormal and benign-malignant. Texture of a breast in a mammogram patch plays a big role in these classifications. We propose a new feature extraction descriptor called Histogram of Oriented Texture (HOT), which is a combination of Histogram of Gradients (HOG) and a Gabor filter, and exploits this fact. We also revisit the Pass Band Discrete Cosine Transform (PB-DCT) descriptor that captures texture information well. All features of a mammogram patch may not be useful. Hence, we apply a feature selection technique called Discrimination Potentiality (DP). Our resulting descriptors, DP-HOT and DP-PB-DCT, are compared with the standard descriptors. Density of a mammogram patch is important for classification, and has not been studied exhaustively. The Image Retrieval in Medical Application (IRMA) database from RWTH Aachen, Germany is a standard database that provides mammogram patches, and most researchers have tested their frameworks only on a subset of patches from this database. We apply our two new descriptors on all images of the IRMA database for density wise classification, and compare with the standard descriptors. We achieve higher accuracy than all of the existing standard descriptors (more than 92% ).", "title": "" }, { "docid": "62f5640954e5b731f82599fb52ea816f", "text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.", "title": "" }, { "docid": "1ff8d3270f4884ca9a9c3d875bdf1227", "text": "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. 
We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded/ invisible surfaces.", "title": "" }, { "docid": "0016ef3439b78a29c76a14e8db2a09be", "text": "In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called multiagent enforced subpopulations (multiagent ESP), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than communication between the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent tasks.", "title": "" }, { "docid": "36fdd31b04f53f7aef27b9d4af5f479f", "text": "Smart meters have been deployed in many countries across the world since early 2000s. The smart meter as a key element for the smart grid is expected to provide economic, social, and environmental benefits for multiple stakeholders. There has been much debate over the real values of smart meters. One of the key factors that will determine the success of smart meters is smart meter data analytics, which deals with data acquisition, transmission, processing, and interpretation that bring benefits to all stakeholders. This paper presents a comprehensive survey of smart electricity meters and their utilization focusing on key aspects of the metering process, different stakeholder interests, and the technologies used to satisfy stakeholder interests. Furthermore, the paper highlights challenges as well as opportunities arising due to the advent of big data and the increasing popularity of cloud environments.", "title": "" }, { "docid": "a2f8cb66e02e87861a322ce50fef97af", "text": "The conversion of biomass by gasification into a fuel suitable for use in a gas engine increases greatly the potential usefulness of biomass as a renewable resource. Gasification is a robust proven technology that can be operated either as a simple, low technology system based on a fixed-bed gasifier, or as a more sophisticated system using fluidized-bed technology. The properties of the biomass feedstock and its preparation are key design parameters when selecting the gasifier system. 
Electricity generation using a gas engine operating on gas produced by the gasification of biomass is applicable equally to both the developed world (as a means of reducing greenhouse gas emissions by replacing fossil fuel) and to the developing world (by providing electricity in rural areas derived from traditional biomass).", "title": "" }, { "docid": "bc892fe2a369f701e0338085eaa0bdbd", "text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.", "title": "" }, { "docid": "2c7bafac9d4c4fedc43982bd53c99228", "text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. 
Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.", "title": "" }, { "docid": "8e00a3e7a07b69bce89a66fc6d4934aa", "text": "This article is organised in five main sections. First, the sub-area of task-based instruction is introduced and contextualised. Its origins within communicative language teaching and second language acquisition research are sketched, and the notion of a task in language learning is defined. There is also brief coverage of the different and sometimes contrasting groups who are interested in the use of tasks. The second section surveys research into tasks, covering the different perspectives (interactional, cognitive) which have been influential. Then a third section explores how performance on tasks has been measured, generally in terms of how complex the language used is, how accurate it is, and how fluent. There is also discussion of approaches to measuring interaction. A fourth section explores the pedagogic and interventionist dimension of the use of tasks. The article concludes with a survey of the various critiques of tasks that have been made in recent years.", "title": "" }, { "docid": "fe0fa94ce6f02626fca12f21b60bec46", "text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. 
This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management were searched for through online journal databases, particularly Web of Science, and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript summarizes with future recommendationsin terms collaborations of public/ private patternships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centersand recycling and compositing activities should be", "title": "" }, { "docid": "ad4949d61aecf488fffcc4ca25ca0fb7", "text": "Predicting the gender of users in social media has aroused great interests in recent years. Almost all existing studies rely on the the content features extracted from the main texts like tweets or reviews. It is sometimes difficult to extract content information since many users do not write any posts at all. In this paper, we present a novel framework which uses only the users' ids and their social contexts for gender prediction. The key idea is to represent users in the embedding connection space. A user often has the social context of family members, schoolmates, colleagues, and friends. This is similar to a word and its contexts in documents, which motivates our study. However, when modifying the word embedding technique for user embedding, there are two major challenges. First, unlike the syntax in language, no rule is responsible for the composition of the social contexts. Second, new users were not seen when learning the representations and thus they do not have embedding vectors. Two strategies circular ordering and incremental updating are proposed to solve these problems. We evaluate our methodology on two real data sets. Experimental results demonstrate that our proposed approach is significantly better than the traditional graph representation and the state-of-the-art graph embedding baselines. 
It also outperforms the content based approaches by a large margin.", "title": "" }, { "docid": "245b313fa0a72707949f20c28ce7e284", "text": "We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.", "title": "" }, { "docid": "f95f77f81f5a4838f9f3fa2538e9d132", "text": "Learning analytics tools should be useful, i.e., they should be usable and provide the functionality for reaching the goals attributed to learning analytics. This paper seeks to unite learning analytics and action research. Based on this, we investigate how the multitude of questions that arise during technology-enhanced teaching and learning systematically can be mapped to sets of indicators. We examine, which questions are not yet supported and propose concepts of indicators that have a high potential of positively influencing teachers' didactical considerations. Our investigation shows that many questions of teachers cannot be answered with currently available research tools. Furthermore, few learning analytics studies report about measuring impact. We describe which effects learning analytics should have on teaching and discuss how this could be evaluated.", "title": "" }, { "docid": "d9b2cb1a7abdadfad4caeb3598a58e68", "text": "A highly efficient planar integrated magnetic (PIM) design approach for primary-parallel isolated boost converters is presented. All magnetic components in the converter, including two input inductors and two transformers with primary-parallel and secondary-series windings, are integrated into an E-I-E-core geometry, reducing the total ferrite volume and core loss. The transformer windings are symmetrically distributed into the outer legs of E-cores, and the inductor windings are wound on the center legs of E-cores with air gaps. Therefore, the inductor and the transformer can be operated independently. Due to the low-reluctance path provided by the shared I-core, the two input inductors can be integrated independently, and also, the two transformers can be partially coupled to each other. Detailed characteristics of the integrated structure have been studied in this paper. AC losses in the windings and the leakage inductance of the transformer are kept low by interleaving the primary and secondary turns of the transformers substantially. Because of the combination of inductors and transformers, the maximum output power capability of the fully integrated module needs to be investigated. Winding loss, core loss, and switching loss of MOSFETs are analyzed in-depth in this work as well. To verify the validity of the design approach, a 2-kW prototype converter with two primary power stages is implemented for fuel-cell-fed traction applications with 20-50-V input and 400-V output. An efficiency of 95.9% can be achieved during 1.5-kW nominal operating conditions. 
Experimental comparisons between the PIM module and three separated cases have illustrated that the PIM module has advantages of lower footprint and higher efficiencies.", "title": "" } ]
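Editor's note: among the abstracts in the passage list above, the ISTA/FISTA abstract (docid 245b313fa0a72707949f20c28ce7e284) describes a concrete algorithm. The sketch below is a minimal, hedged illustration of a FISTA-style solver applied to a LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1; the step size, regularization value, and problem data are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA-style iterations for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / extrapolation step
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[:5] = rng.standard_normal(5)    # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = fista(A, b, lam=0.1)
    print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

The momentum sequence t_k is what distinguishes this accelerated variant from plain ISTA; dropping the extrapolation step recovers the slower method described in the same abstract.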
scidocsrr
80be253c6f3f2578e7b8c291ebf98f4b
Recent developments in human gait research: parameters, approaches, applications, machine learning techniques, datasets and challenges
[ { "docid": "c6e0843498747096ebdafd51d4b5cca6", "text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.", "title": "" } ]
[ { "docid": "59dfaac9730e526604193f06b48a9dd5", "text": "We evaluated the functional and oncological outcome of ultralow anterior resection and coloanal anastomosis (CAA), which is a popular technique for preserving anal sphincter in patients with distal rectal cancer. Forty-eight patients were followed up for 6–100 months regarding fecal or gas incontinence, frequency of bowel movement, and local or systemic recurrence. The main operative techniques were total mesorectal excision with autonomic nerve preservation; the type of anastomosis was straight CAA, performed by the perianal hand sewn method in 38 cases and by the double-stapled method in 10. Postoperative complications included transient urinary retention (n=7), anastomotic stenosis (n=3), anastomotic leakage (n=3), rectovaginal fistula (n=2), and cancer positive margin (n=1; patient refused reoperation). Overall there were recurrences in seven patients (14.5%): one local and one systemic recurrence in stage B2; and one local, two systemic, and two combined local and systemic in C2. The mean frequency of bowel movements was 6.1 per day after 3 months, 4.4 after 1 year, and 3.1 after 2 years. The Kirwan grade for fecal incontinence was 2.7 after 3 months, 1.8 after 1 year, and 1.5 after 2 years. With careful selection of patients and good operative technique, CAA can be performed safely in distal rectal cancer. Normal continence and acceptable frequency of bowel movements can be obtained within 1 year after operation without compromising the rate of local recurrence.", "title": "" }, { "docid": "82a40130bc83a2456c8368fa9275c708", "text": "This paper presents a novel strategy for using ant colony optimization (ACO) to evolve the structure of deep recurrent neural networks. While versions of ACO for continuous parameter optimization have been previously used to train the weights of neural networks, to the authors’ knowledge they have not been used to actually design neural networks. The strategy presented is used to evolve deep neural networks with up to 5 hidden and 5 recurrent layers for the challenging task of predicting general aviation flight data, and is shown to provide improvements of 63 % for airspeed, a 97 % for altitude and 120 % for pitch over previously best published results, while at the same time not requiring additional input neurons for residual values. The strategy presented also has many benefits for neuro evolution, including the fact that it is easily parallizable and scalable, and can operate using any method for training neural networks. Further, the networks it evolves can typically be trained in fewer iterations than fully connected networks.", "title": "" }, { "docid": "f9f1cf949093c41a84f3af854a2c4a8b", "text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. 
This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.", "title": "" }, { "docid": "153f452486e2eacb9dc1cf95275dd015", "text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.", "title": "" }, { "docid": "f31ec6460f0e938f8e43f5b9be055aaf", "text": "Many people have turned to technological tools to help them be physically active. To better understand how goal-setting, rewards, self-monitoring, and sharing can encourage physical activity, we designed a mobile phone application and deployed it in a four-week field study (n=23). Participants found it beneficial to have secondary and primary weekly goals and to receive non-judgmental reminders. However, participants had problems with some features that are commonly used in practice and suggested in the literature. For example, trophies and ribbons failed to motivate most participants, which raises questions about how such rewards should be designed. A feature to post updates to a subset of their Facebook NewsFeed created some benefits, but barriers remained for most participants.", "title": "" }, { "docid": "1169d70de6d0c67f52ecac4d942d2224", "text": "All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. 
Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis", "title": "" }, { "docid": "cdee51ab9562e56aee3fff58cd2143ba", "text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.", "title": "" }, { "docid": "3baec781f7b5aaab8598c3628ea0af3b", "text": "Article history: Received 15 November 2010 Received in revised form 9 February 2012 Accepted 15 February 2012 Information professionals performing business activity related investigative analysis must routinely associate data from a diverse range of Web based general-interest business and financial information sources. XBRL has become an integral part of the financial data landscape. At the same time, Open Data initiatives have contributed relevant financial, economic, and business data to the pool of publicly available information on the Web but the use of XBRL in combination with Open Data remains at an early state of realisation. In this paper we argue that Linked Data technology, created for Web scale information integration, can accommodate XBRL data and make it easier to combine it with open datasets. This can provide the foundations for a global data ecosystem of interlinked and interoperable financial and business information with the potential to leverage XBRL beyond its current regulatory and disclosure role. We outline the uses of Linked Data technologies to facilitate XBRL consumption in conjunction with non-XBRL Open Data, report on current activities and highlight remaining challenges in terms of information consolidation faced by both XBRL and Web technologies. © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d4ed4cad670b1e11cfb3c869e34cf9fd", "text": "BACKGROUND\nDespite the many antihypertensive medications available, two-thirds of patients with hypertension do not achieve blood pressure control. 
This is thought to be due to a combination of poor patient education, poor medication adherence, and \"clinical inertia.\" The present trial evaluates an intervention consisting of health coaching, home blood pressure monitoring, and home medication titration as a method to address these three causes of poor hypertension control.\n\n\nMETHODS/DESIGN\nThe randomized controlled trial will include 300 patients with poorly controlled hypertension. Participants will be recruited from a primary care clinic in a teaching hospital that primarily serves low-income populations.An intervention group of 150 participants will receive health coaching, home blood pressure monitoring, and home-titration of antihypertensive medications during 6 months. The control group (n=150) will receive health coaching plus home blood pressure monitoring for the same duration. A passive control group will receive usual care. Blood pressure measurements will take place at baseline, and after 6 and 12 months. The primary outcome will be change in systolic blood pressure after 6 and 12 months. Secondary outcomes measured will be change in diastolic blood pressure, adverse events, and patient and provider satisfaction.\n\n\nDISCUSSION\nThe present study is designed to assess whether the 3-pronged approach of health coaching, home blood pressure monitoring, and home medication titration can successfully improve blood pressure, and if so, whether this effect persists beyond the period of the intervention.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT01013857.", "title": "" }, { "docid": "c61b210036484009cf8077a803824695", "text": "Synthetic Aperture Radar (SAR) image is disturbed by multiplicative noise known as speckle. In this paper, based on the power of deep fully convolutional network, an encoding-decoding framework is introduced for multisource SAR image despeckling. The network contains a series of convolution and deconvolution layers, forming an end-to-end non-linear mapping between noise and clean SAR images. With addition of skip connection, the network can keep image details and accomplish the strategy for residual learning which solves the notorious problem of vanishing gradients and accelerates convergence. The experimental results on simulated and real SAR images show that the introduced approach achieves improvements in both despeckling performance and time efficiency over the state-of-the-art despeckling methods.", "title": "" }, { "docid": "8fb598f1f55f7a20bfc05865fc0a5efa", "text": "The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem for model-based anomaly detection. We introduce a long short-term memory-based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution by introducing a progress-based varying prior. Our LSTM-VAE-based detector reports an anomaly when a reconstruction-based anomaly score is higher than a state-based threshold. For evaluations with 1555 robot-assisted feeding executions, including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve of 0.8710 than 5 other baseline detectors from the literature. 
We also show the variational autoencoding and state-based thresholding are effective in detecting anomalies from 17 raw sensory signals without significant feature engineering effort.", "title": "" }, { "docid": "577c557bc6fcddcb51e962e68ed034ed", "text": "Text categorization is used to assign each text document to predefined categories. This paper presents a new text classification method for classifying Chinese text based on Rocchio algorithm. We firstly use the TFIDF to extract document vectors from the training documents which have been correctly categorized, and then use those document vectors to generate codebooks as classification models using the LBG and Rocchio algorithm. The codebook is then used to categorize the target documents using vector scores. We tested this method in the experiment and the result shows that this method can achieve better performance.", "title": "" }, { "docid": "d72652b6ad54422e6864baccc88786a8", "text": "Neisseria meningitidis is a major global pathogen that continues to cause endemic and epidemic human disease. Initial exposure typically occurs within the nasopharynx, where the bacteria can invade the mucosal epithelium, cause fulminant sepsis, and disseminate to the central nervous system, causing bacterial meningitis. Recently, Chamot-Rooke and colleagues1 described a unique virulence property of N. meningitidis in which the bacterial surface pili, after contact with host cells, undergo a modification that facilitates both systemic invasion and the spread of colonization to close contacts. Person-to-person spread of N. meningitidis can result in community epidemics of bacterial meningitis, with major consequences for public health. In resource-poor nations, cyclical outbreaks continue to result in high mortality and long-term disability, particularly in sub-Saharan Africa, where access to early diagnosis, antibiotic therapy, and vaccination is limited.2,3 An exclusively human pathogen, N. meningitidis uses several virulence factors to cause disease. Highly charged and hydrophilic capsular polysaccharides protect N. meningitidis from phagocytosis and complement-mediated bactericidal activity of the innate immune system. A family of proteins (called opacity proteins) on the bacterial outer membrane facilitate interactions with both epithelial and endothelial cells. These proteins are phase-variable — that is, the genome of the bacterium encodes related opacity proteins that are variably expressed, depending on environment, allowing the bacterium to adjust to rapidly changing environmental conditions. Lipooligosaccharide, analogous to the lipopolysaccharide of enteric gram-negative bacteria, contains a lipid A moiety with endotoxin activity that promotes the systemic sepsis encountered clinically. However, initial attachment to host cells is primarily mediated by filamentous organelles referred to as type IV pili, which are common to many bacterial pathogens and unique in their ability to undergo both antigenic and phase variation. Within hours of attachment to the host endothelial cell, N. meningitidis induces the formation of protrusions in the plasma membrane of host cells that aggregate the bacteria into microcolonies and facilitate pili-mediated contacts between bacteria and between bacteria and host cells. After attachment and aggregation, N. 
meningitidis detaches from the aggregates to systemically invade the host, by means of a transcellular pathway that crosses the respiratory epithelium,4 or becomes aerosolized and spreads the colonization of new hosts (Fig. 1). Chamot-Rooke et al. dissected the molecular mechanism underlying this critical step of systemic invasion and person-to-person spread and reported that pathogenesis depends on a unique post-translational modification of the type IV pili. Using whole-protein mass spectroscopy, electron microscopy, and molecular modeling, they showed that the major component of N. meningitidis type IV pili (called PilE or pilin) undergoes an unusual post-translational modification by phosphoglycerol. Expression of pilin phosphotransferase, the enzyme that transfers phosphoglycerol onto pilin, is increased within 4 hours of meningococcus contact with host cells and modifies the serine residue at amino acid position 93 of pilin, altering the charge of the pilin structure and thereby destabilizing the pili bundles, reducing bacterial aggregation, and promoting detachment from the cell surface. Strains of N. meningitidis in which phosphoglycerol modification of pilin occurred had a greatly enhanced ability to cross epithelial monolayers, a finding that supports the view that this virulence property, which causes deaggregation, promotes both transmission to new hosts and systemic invasion. Although this new molecular understanding of N. meningitidis virulence in humans is provoc-", "title": "" }, { "docid": "83f970bc22a2ada558aaf8f6a7b5a387", "text": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact, that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R. Introduction In almost every domain from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) up to social science (Gottman, 1981) different time series data are measured. While the recorded datasets itself may be different, one common problem are missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics this process of replacing missing values is called imputation. Time series imputation thereby is a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on interattribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data. 
Most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as addition to their core package functionality. Most noteworthy being zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages offer also some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview about available time series imputation packages in R see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview, about all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. Overview imputeTS package The imputeTS package can be found on CRAN and is an easy to use package that offers several utilities for ’univariate, equi-spaced, numeric time series’. Univariate means there is just one attribute that is observed over time. Which leads to a sequence of single observations o1, o2, o3, ... on at successive points t1, t2, t3, ... tn in time. Equi-spaced means, that time increments between successive data points are equal |t1 − t2| = |t2 − t3| = ... = |tn−1 − tn|. Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview about all available functions and datasets is given. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 2 This is followed by more detailed overviews about the three areas covered by the package: ’Plots & Statistics’, ’Imputation’ and ’Datasets’. Information about how to apply these functions and tools can be found later in the Usage examples section. General overview As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets. The imputation algorithms can be divided into rather simple but fast approaches like mean imputation and more advanced algorithms that need more computation time like kalman smoothing on a structural model. Simple Imputation Imputation Plots & Statistics Datasets na.locf na.interpolation plotNA.distribution tsAirgap na.mean na.kalman plotNA.distributionBar tsAirgapComplete na.random na.ma plotNA.gapsize tsHeating na.replace na.seadec plotNA.imputations tsHeatingComplete na.remove na.seasplit statsNA tsNH4 tsNH4Complete Table 1: General Overview imputeTS package As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. 
In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms. Plots & Statistics functions An overview about the available plots and statistics functions can be found in Table 2. To get a good impression what the plots look like section Usage examples is recommended. Function Description plotNA.distribution Visualize Distribution of Missing Values plotNA.distributionBar Visualize Distribution of Missing Values (Barplot) plotNA.gapsize Visualize Distribution of NA gap sizes plotNA.imputations Visualize Imputed Values statsNA Print Statistics about the Missing Data Table 2: Overview Plots & Statistics The statsNA function calculates several missing data statistics of the input data. This includes overall percentage of missing values, absolute amount of missing values, amount of missing value in different sections of the data, longest series of consecutive NAs and occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located. The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designated for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 3 Imputation functions An overview about all available imputation algorithms can be found in Table 3. Even if these functions are really easy applicable, some examples can be found later in section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b). Function Option Description na.interpolation linear Imputation by Linear Interpolation spline Imputation by Spline Interpolation stine Imputation by Stineman Interpolation na.kalman StructTS Imputation by Structural Model & Kalman Smoothing auto.arima Imputation by ARIMA State Space Representation & Kalman Sm. 
na.locf locf Imputation by Last Observation Carried Forward nocb Imputation by Next Observation Carried Backward na.ma simple Missing Value Imputation by Simple Moving Average linear Missing Value Imputation by Linear Weighted Moving Average exponential Missing Value Imputation by Exponential Weighted Moving Average na.mean mean MissingValue Imputation by Mean Value median Missing Value Imputation by Median Value mode Missing Value Imputation by Mode Value na.random Missing Value Imputation by Random Sample na.replace Replace Missing Values by a Defined Value na.seadec Seasonally Decomposed Missing Value Imputation na.seasplit Seasonally Splitted Missing Value Imputation na.remove Remove Missing Values Table 3: Overview Imputation Algorithms For convenience similar algorithms are available under one function name as parameter option. For example linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series. The na.seasplit and na.seadec functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b", "title": "" }, { "docid": "ab44369792f03c9d1a171789fca24001", "text": "High-speed actions are known to impact soccer performance and can be categorized into actions requiring maximal speed, acceleration, or agility. Contradictory findings have been reported as to the extent of the relationship between the different speed components. This study comprised 106 professional soccer players who were assessed for 10-m sprint (acceleration), flying 20-m sprint (maximum speed), and zigzag agility performance. Although performances in the three tests were all significantly correlated (p < 0.0005), coefficients of determination (r(2)) between the tests were just 39, 12, and 21% for acceleration and maximum speed, acceleration and agility, and maximum speed and agility, respectively. Based on the low coefficients of determination, it was concluded that acceleration, maximum speed, and agility are specific qualities and relatively unrelated to one another. The findings suggest that specific testing and training procedures for each speed component should be utilized when working with elite players.", "title": "" }, { "docid": "6d5429ddf4050724432da73af60274d6", "text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. 
The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" }, { "docid": "2d259ed5d3a1823da7cf54302d8ad1a6", "text": "We present Lynx-robot, a quadruped, modular, compliant machine. It alternately features a directly actuated, single-joint spine design, or an actively supported, passive compliant, multi-joint spine configuration. Both spine configurations bend in the sagittal plane. This study aims at characterizing these two, largely different spine concepts, for a bounding gait of a robot with a three segmented, pantograph leg design. An earlier, similar-sized, bounding, quadruped robot named Bobcat with a two-segment leg design and a directly actuated, single-joint spine design serves as a comparison robot, to study and compare the effect of the leg design on speed, while keeping the spine design fixed. 
Both proposed spine designs (single rotatory and active and multi-joint compliant) reach moderate, self-stable speeds.", "title": "" }, { "docid": "03966c28d31e1c45896eab46a1dcce57", "text": "For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly efficient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a finite distributive lattice.", "title": "" } ]
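Editor's note: the last abstract in the list above (docid 03966c28d31e1c45896eab46a1dcce57) describes exact sampling by coupling from the past. The sketch below is a small illustration of the monotone variant only: it draws an exact sample from the stationary distribution of a lazy reflecting random walk on {0, ..., n} (which is uniform). The toy chain and update rule are assumptions chosen because the walk is monotone; they are not examples from the paper.

```python
import random

def update(x, u, n):
    """Monotone update rule for a lazy reflecting walk on {0, ..., n}."""
    return min(x + 1, n) if u < 0.5 else max(x - 1, 0)

def monotone_cftp(n, seed=0):
    """Propp-Wilson monotone coupling from the past; returns one exact sample."""
    rng = random.Random(seed)
    draws = []      # draws[k] is the randomness used at time -(k+1); reused across restarts
    T = 1
    while True:
        while len(draws) < T:
            draws.append(rng.random())
        lo, hi = 0, n                       # start minimal and maximal chains at time -T
        for k in range(T - 1, -1, -1):      # run forward from time -T up to time 0
            lo = update(lo, draws[k], n)
            hi = update(hi, draws[k], n)
        if lo == hi:                        # coalescence by time 0 => exact stationary sample
            return lo
        T *= 2                              # go further into the past, reusing old randomness

if __name__ == "__main__":
    samples = [monotone_cftp(10, seed=s) for s in range(2000)]
    # The stationary law of this lazy reflecting walk is uniform on 0..10,
    # so the counts below should be roughly equal.
    print({v: samples.count(v) for v in range(11)})
```

The key detail, as the abstract explains, is that the algorithm decides on its own when to stop: it doubles the starting time in the past and reuses the same randomness for the more recent steps until the extreme chains coalesce.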
scidocsrr
78cbc33673f79fb2d27cdd17125660f7
On security and privacy issues of fog computing supported Internet of Things environment
[ { "docid": "55a6353fa46146d89c7acd65bee237b5", "text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.", "title": "" }, { "docid": "1e5956b0d9d053cd20aad8b53730c969", "text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.", "title": "" } ]
[ { "docid": "d679e7cbef9ac3cfbea38b92891fc1a0", "text": "Personal health records (PHR) have enormous potential to improve both documentation of health information and patient care. The adoption of these systems, however, has been relatively slow. In this work, we used a multi-method approach to evaluate PHR systems. We interviewed potential end users---clinicians and patients---and conducted evaluations with patients and caregivers as well as a heuristic evaluation with HCI experts. In these studies, we focused on three PHR systems: Google Health, Microsoft HealthVault, and WorldMedCard. Our results demonstrate that both usability concerns and socio-cultural influences are barriers to PHR adoption and use. In this paper, we present those results as well as reflect on how both PHR designers and developers might address these issues now and throughout the design cycle.", "title": "" }, { "docid": "4d56abf003caaa11e5bef74a14bd44e0", "text": "The increasing importance of search engines to commercial web sites has given rise to a phenomenon we call \"web spam\", that is, web pages that exist only to mislead search engines into (mis)leading users to certain web sites. Web spam is a nuisance to users as well as search engines: users have a harder time finding the information they need, and search engines have to cope with an inflated corpus, which in turn causes their cost per query to increase. Therefore, search engines have a strong incentive to weed out spam web pages from their index.We propose that some spam web pages can be identified through statistical analysis: Certain classes of spam pages, in particular those that are machine-generated, diverge in some of their properties from the properties of web pages at large. We have examined a variety of such properties, including linkage structure, page content, and page evolution, and have found that outliers in the statistical distribution of these properties are highly likely to be caused by web spam.This paper describes the properties we have examined, gives the statistical distributions we have observed, and shows which kinds of outliers are highly correlated with web spam.", "title": "" }, { "docid": "5dddbb144a947892fd7bfcc041263e3c", "text": "The ability of deep convolutional neural networks (CNNs) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep CNN architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a “shallow” dictionary learning model with augmentation. 
Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.", "title": "" }, { "docid": "27e10b0ba009a8b86431a808e712d761", "text": "In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long distance images by a factor often and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long range imaging. We emulate a camera array with a single camera attached to an XY translation stage. We show that an appropriate phase retrieval based reconstruction algorithm can be used to effectively recover the lost high resolution details from the multiple low resolution acquired images. We analyze the effects of noise, required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show resolution gains of 10× and more are achievable. Furthermore, experimental results from our proof-of-concept systems show resolution gains of 4 × -7× for real scenes. All experimental data and code is made publicly available on the project webpage. Finally, we introduce and analyze in simulation a new strategy to capture macroscopic Fourier Ptychography images in a single snapshot, albeit using a camera array.", "title": "" }, { "docid": "86f5c3e7b238656ae5f680db6ce0b7f5", "text": "It is important to study and analyse educational data especially students’ performance. Educational Data Mining (EDM) is the field of study concerned with mining educational data to find out interesting patterns and knowledge in educational organizations. This study is equally concerned with this subject, specifically, the students’ performance. This study explores multiple factors theoretically assumed to affect students’ performance in higher education, and finds a qualitative model which best classifies and predicts the students’ performance based on related personal and social factors. Keywords—Data Mining; Education; Students; Performance; Patterns", "title": "" }, { "docid": "d01e73d5437f1c1de3c0b3c2fb502bf4", "text": "The present study investigated the effects of loneliness, depression and perceived social support on problematic Internet use among university students. The participants were 459 students at two universities in Turkey. The study data were collected with a Questionnaire Form, Problematic Internet Use Scale (PIUS), University of California at Los Angeles (UCLA) Loneliness Scale (Version 3), Multidimensional Scale of Perceived Social Support (MSPSS) and Beck Depression Inventory (BDI). The Mann-Whitney U Test and Kruskal-Wallis one-way analysis of variance were conducted to examine the differences; and correlation and regression analyses were used to examine the relationships between variables. There was a positive significant correlation between the PIUS and MSPSS and the UCLA Loneliness Scale and a negative significant correlation between the PIUS and Beck Depression Scale (BDS). The female students had higher total PIUS scores. 
The results also illustrated that there was a statistically significant difference in total PIUS scores according to having a social network account. INTRODUCTION The Internet has become the leading tool of communication in the 21st century. With a gradual increase in the public use of the Internet and widening differences in user profiles, it has become inevitable to study both the negative effects of the Internet and its positive contributions, such as sharing knowledge and facilitating communication between people (Odaci and Kalkan 2010). The concept of “problematic internet use” refers to internet use that the individual can no longer control. “Problematic Internet use” (Beard and Wolf 2001; Davis et al. 2002), which is also called “pathological Internet use” (Davis 2001; Morahan-Martin and Schumacher 2000), manifests itself as spending more and more time on the Internet, being unable to resist the desire to go online, and continuing to use the Internet despite mental preoccupation and deteriorating functioning in various areas of life. Studies have shown that Internet use is comparatively more common among university students (Morahan-Martin and Schumacher 2000; Nalwa and Anand 2003; Niemz et al. 2006; SPO 2008). A study carried out by the Turkish State Planning Organization (SPO) with a larger sample (2008) suggested that 16-24 year-old young people compose the leading group of Internet users (65.6%), that Internet use increases with educational status (87.7%) and that students are the top users of the Internet (82.2%) (State Planning Organization Information Society Statistics 2008). As a result, young Internet users are more likely to develop Internet addiction (Chou et al. 2005). The higher levels of Internet addiction among university students may result from a variety of reasons. They may encounter many challenges (gaining independence, seeking a better career, adapting to peer groups) with their new life at university. Some university students may not successfully cope with such novelties and difficulties and may develop depression or stress, which may lead to an escape into the online world (Celik and Odaci 2012). Thus, it is essential to investigate the correlation between Internet use and students' mental problems when developing preventive guidance programs against Internet addiction. Easier and faster Internet access at universities may also increase the risk of university students being exposed to its negative effects. Ceyhan (2010) argued that the findings of different studies on problematic Internet use would enable us to make generalizations and to understand the nature of this behavior better. In Turkey, there is a great need for studies on problematic Internet use (PIU) of university students.
Socio-demographic Features and Problematic Internet Use Studies in the literature delve into the relationship between problematic Internet use and variables such as gender (Serin 2011; Ceyhan and Ceyhan 2007; Celik and Odaci 2012; Morahan-Martin and Schumacher 2000; Odaci and Kalkan 2010; Tekinarslan and Gurer 2011; Weiser 2000) and age/class level (Ceyhan and Ceyhan 2007; Johanson and Götestam 2004). However, studies with different sampling characteristics revealed different implications regarding some predictor variables, including gender. In Turkey, studies of university students similarly reported that boys show more pathological computer use than girls (Serin 2011; Ceyhan and Ceyhan 2007; Celik and Odaci 2012; Odaci and Kalkan 2010; Tekinarslan and Gurer 2011). However, some of the studies point out that there are no gender differences in the PIU levels of the students (Ceyhan et al. 2009; Davis et al. 2002; Hardie and Yi-Tee 2007; Odaci and Celik 2011). Similarly, studies with different sampling characteristics revealed different implications regarding age (Hardie and Yi-Tee 2007; Niemz et al. 2005). Further, there is still some controversy in the PIU literature, particularly about the age issue. Time Spent Online and Problematic Internet Use Time spent on the Internet is one of the most important diagnostic criteria for problematic Internet use. The more time spent using the Internet, the higher the possibility of problematic Internet use. Researchers have investigated the relationship between PIU and time spent online (Morahan-Martin and Schumacher 2000; Odaci and Kalkan 2010) and the purpose of internet usage (Caplan 2002; Chak and Leung 2004). People who are addicted to the Internet make intense and frequent use of the Internet, as measured per week. In particular, purposes of internet use such as gambling, gaming, and chatting may lead individuals to spend more time online, and this may result in PIU (Morahan-Martin and Schumacher 2000; Tekinarslan and Gurer 2011). The studies also show that the more time spent on the Internet, the more likely users were to have problematic Internet use and unhealthier lifestyles. Internet use changed with regard to several lifestyle-related factors, including decreases in physical activity, increases in time spent on the Internet, shorter durations or lack of sleep, and increasingly irregular dietary habits and poor eating patterns (Kim and Chun 2005; Lam et al. 2009). Loneliness, Depression, Social Support and Problematic Internet Use Recent studies on the Internet mainly focus on psychosocial wellness and Internet use, particularly the correlation between PIU and depression (Shapira et al. 2000), loneliness (Serin 2011; Caplan 2007; Ceyhan and Ceyhan 2008; Davis 2001; Davis et al. 2002; Durak-Batigun and Hasta 2010; Gross et al. 2002; Hardie and Yi-Tee 2007; Kim et al. 2009; Morahan-Martin and Schumacher 2003; Odaci and Kalkan 2010), social support (Hardie and Yi-Tee 2007; Keser-Ozcan and Buzlu 2005; Swickert et al. 2002) and interpersonal distortion (Kalkan 2012) among university students. Davis (2001) suggested that psychosocial problems, such as loneliness and depression, are the precursors of PIU and that lonely and depressed people are more prone to prefer online interaction.
This, further, acknowledged that individuals with lower levels of communication skills prefer online communication to face-to-face communication and reportedly experience difficulties in controlling the time spent online (Davis 2001). Shaw and Gant (2002) stated that more Internet use was associated with an increase in perceived social support as well as a decrease in loneliness. In a study it was found that lonely individuals can develop a preference for online social interaction and it can cause problematic Internet use (Caplan 2003). In Turkey, Odaci and Kalkan (2010) additionally noted that PIU among university students increases with higher levels of loneliness. Ceyhan and Ceyhan (2008) stated that individuals experiencing the feeling of loneliness tend to have more PIU behavior. Based on these theoretical frameworks, this analytical study aims to conduct a thorough analysis of the effects of loneliness, depression and perceived social support on problematic Internet use among university students. The hypotheses of the study are as follows: 1. There is a significant difference between students’ gender and levels of problematic Internet use. 2. There is a significant difference between students’ age and levels of problematic Internet use. 3. There is a significant difference between levels of problematic Internet use and students’ length of Internet use. 4. There is a significant difference between levels of problematic Internet use and having a social network account. 5. There is a significant correlation between students’ problematic Internet use and loneliness, depression and social support levels. MATERIAL AND METHODS", "title": "" }, { "docid": "0b22284d575fb5674f61529c367bb724", "text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.", "title": "" }, { "docid": "7543281174d7dc63e180249d94ad6c07", "text": "Enriching speech recognition output with sentence boundaries improves its human readability and enables further processing by downstream language processing modules. We have constructed a hidden Markov model (HMM) system to detect sentence boundaries that uses both prosodic and textual information. Since there are more nonsentence boundaries than sentence boundaries in the data, the prosody model, which is implemented as a decision tree classifier, must be constructed to effectively learn from the imbalanced data distribution. To address this problem, we investigate a variety of sampling approaches and a bagging scheme. A pilot study was carried out to select methods to apply to the full NIST sentence boundary evaluation task across two corpora (conversational telephone speech and broadcast news speech), using both human transcriptions and recognition output.
In the pilot study, when classification error rate is the performance measure, using the original training set achieves the best performance among the sampling methods, and an ensemble of multiple classifiers from different downsampled training sets achieves slightly poorer performance, but has the potential to reduce computational effort. However, when performance is measured using receiver operating characteristics (ROC) or area under the curve (AUC), then the sampling approaches outperform the original training set. This observation is important if the sentence boundary detection output is used by downstream language processing modules. Bagging was found to significantly improve system performance for each of the sampling methods. The gain from these methods may be diminished when the prosody model is combined with the language model, which is a strong knowledge source for the sentence detection task. The patterns found in the pilot study were replicated in the full NIST evaluation task. The conclusions may be dependent on the task, the classifiers, and the knowledge combination approach.", "title": "" }, { "docid": "4b84b6936669a2496e5172de0023c965", "text": "We present a patient with partial monosomy of the short arm of chromosome 18 caused by de novo translocation t(Y;18) and a generalized form of keratosis pilaris (keratosis pilaris affecting the skin follicles of the trunk, limbs and face-ulerythema ophryogenes). Two-color FISH with centromere-specific Y and 18 DNA probes identified the derivative chromosome 18 as a dicentric with breakpoints in p11.2 on both involved chromosomes. The patient had another normal Y chromosome. This is the third report of the presence of a chromosome 18p deletion (and first case of a translocation involving 18p and a sex chromosome) with this genodermatosis. Our data suggest that the short arm of chromosome 18 is a candidate region for a gene causing keratosis pilaris. Unmasking of a recessive mutation at the disease locus by deletion of the wild type allele could be the cause of the recessive genodermatosis.", "title": "" }, { "docid": "ac2f02b46a885cf662c41a16f976819e", "text": "This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities.
Finally, we discuss how these methods are related to the conceptual framework and to one another.", "title": "" }, { "docid": "91b924c8dbb22ca4593150c5fadfd38b", "text": "This paper investigates the power allocation problem of full-duplex cooperative non-orthogonal multiple access (FD-CNOMA) systems, in which the strong users relay data for the weak users via a full duplex relaying mode. For the purpose of fairness, our goal is to maximize the minimum achievable user rate in a NOMA user pair. More specifically, we consider the power optimization problem for two different relaying schemes, i.e., the fixed relaying power scheme and the adaptive relaying power scheme. For the fixed relaying scheme, we demonstrate that the power allocation problem is quasi-concave and a closed-form optimal solution is obtained. Then, based on the derived results of the fixed relaying scheme, the optimal power allocation policy for the adaptive relaying scheme is also obtained by transforming the optimization objective function as a univariate function of the relay transmit power $P_R$. Simulation results show that the proposed FD- CNOMA scheme with adaptive relaying can always achieve better or at least the same performance as the conventional NOMA scheme. In addition, there exists a switching point between FD-CNOMA and half- duplex cooperative NOMA.", "title": "" }, { "docid": "ca599d7b637d25835d881c6803a9e064", "text": "Accumulating research shows that prenatal exposure to maternal stress increases the risk for behavioral and mental health problems later in life. This review systematically analyzes the available human studies to identify harmful stressors, vulnerable periods during pregnancy, specificities in the outcome and biological correlates of the relation between maternal stress and offspring outcome. Effects of maternal stress on offspring neurodevelopment, cognitive development, negative affectivity, difficult temperament and psychiatric disorders are shown in numerous epidemiological and case-control studies. Offspring of both sexes are susceptible to prenatal stress but effects differ. There is not any specific vulnerable period of gestation; prenatal stress effects vary for different gestational ages possibly depending on the developmental stage of specific brain areas and circuits, stress system and immune system. Biological correlates in the prenatally stressed offspring are: aberrations in neurodevelopment, neurocognitive function, cerebral processing, functional and structural brain connectivity involving amygdalae and (pre)frontal cortex, changes in hypothalamo-pituitary-adrenal (HPA)-axis and autonomous nervous system.", "title": "" }, { "docid": "a0279756831dcba1dc1dee634e1d7e8b", "text": "Join order selection plays a significant role in query performance. Many modern database engine query optimizers use join order enumerators, cost models, and cardinality estimators to choose join orderings, each of which is based on painstakingly hand-tuned heuristics and formulae. Additionally, these systems typically employ static algorithms that ignore the end result (they do not “learn from their mistakes”). In this paper, we argue that existing deep reinforcement learning techniques can be applied to query planning. These techniques can automatically tune themselves, alleviating a massive human effort. Further, deep reinforcement learning techniques naturally take advantage of feedback, learning from their successes and failures. 
Towards this goal, we present ReJOIN, a proof-of-concept join enumerator. We show preliminary results indicating that ReJOIN can match or outperform the Postgres optimizer.", "title": "" }, { "docid": "3fb840309fcd22533cf86f57dbae22b5", "text": "Non-volatile RAM (NVRAM) makes it possible for data structures to tolerate transient failures, assuming however that programmers have designed these structures such that their consistency is preserved upon recovery. Previous approaches are typically transactional and inherently make heavy use of logging, resulting in implementations that are significantly slower than their DRAM counterparts. In this paper, we introduce a set of techniques aimed at lock-free data structures that, in the large majority of cases, remove the need for logging (and costly durable store instructions) both in the data structure algorithm and in the associated memory management scheme. Together, these generic techniques enable us to design what we call log-free concurrent data structures, which, as we illustrate on linked lists, hash tables, skip lists, and BSTs, can provide several-fold performance improvements over previous transaction-based implementations, with overheads of the order of milliseconds for recovery after a failure. We also highlight how our techniques can be integrated into practical systems, by presenting a durable version of Memcached that maintains the performance of its volatile counterpart.", "title": "" }, { "docid": "2fe0e5b0b49e886c9f99132f50beeea6", "text": "Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. We show that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. A pilot study with unintended EMG electrode pressure variability led to exploration of the approach in greater depth. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures, with an overall 10-fold cross validation classification accuracy of 96%. We show that EMG is especially suited to sensing finger movements, that pressure is suited to sensing wrist and forearm rotations, and their combination is significantly more accurate for a range of gestures than either technique alone. The technique is well suited to existing wearable device forms such as smart watches that are already mounted on the wrist.", "title": "" }, { "docid": "78d879c810c64413825d7a243c9de78c", "text": "Algebra greatly broadened the very notion of algebra in two ways. First, the traditional numerical domains such as Z, Q R, and C, were now seen as instances of more general concepts of equationally-defined algebraic structure, which did not depend on any particular representation for their elements, but only on abstract sets of elements, operations on such elements, and equational properties satisfied by such operations. In this way, the integers Z were seen as an instance of the ring algebraic structure, that is, a set R with constants 0 and 1, and with addition + and mutiplication ∗ operations satisfying the equational axioms of the theory of rings, along with other rings such as the ring Zk of the residue classes of integers modulo k, the ring Z[x1, . . . , xn] of polynomials on n variables, and so on. 
Likewise, Q, R, and C were viewed as instances of the field structure, that is, a ring F together with a division operator / , so that each nonzero element x has an inverse 1/x with x ∗ (1/x) = 1, along with other fields such as the fields Zp, with p prime, the fields of rational functions Q(x1, . . . , xn), R(x1, . . . , xn), and C(x1, . . . , xn) (whose elements are quotients p/q with p, q polynomials and q , 0), and so on. A second way in which Abstract Algebra broadened the notion of algebra was by considering other equationally-defined structures besides rings and fields, such as monoids, groups, modules, vector spaces, and so on. This intimately connected algebra with other areas of mathematics such as geometry, analysis and topology in new ways, besides the already well-known connections with geometic figures defined as solutions of polynomal equations (the so-called algebraic varieties, such as algebraic curves or surfaces). Universal Algebra (the seminal paper is the one by Garett Birkhoff [4]), takes one more step in this line of generalization: why considering only the usual suspects: monoids, groups, rings, fields, modules, and vector spaces? Why not considering any algebraic structure defined by an arbitrary collection Σ of function symbols (called a signature), and obeying an arbitrary set E of equational axioms? And why not developing algebra in this much more general setting? That is, Universal Algebra is just Abstract Algebra brought to its full generality. Of course, generalization never stops, so that Universal Algebra itself has been further generalized in various directions. One of them, which we will fully pursue in this Part II and which, as we shall see, has many applications to Computer Science, is from considering a single set of data elements (unsorted algebras) to considering a family of such sets (many-sorted algebras), or a family of such sets but allowing subtype inclusions (order-sorted algebras). Three other, are: (i) replacing the underlying sets by richer structures such as posets, topological spaces, sheaves, or algebraic varieties, leading to notions such as those of an ordered algebra, a topological algebra, or an algebraic structure on a sheaf or on an algebraic variety; for example, an elliptic curve is a cubic curve having a commutative group structure; (ii) allowing not only finitary operations but also infinitary ones (we have already seen examples of such algebras with infinitary operations —namely, complete lattices and complete semi-lattices— in §7.5); and (iii) allowing operations to be partial functions, leading to the notion of a partial algebra. Order-sorted algebras already provide quite useful support for certain forms of partiality; and their generalization to algebras in membership equational logic provides full support for partiality (see [36, 39]).", "title": "" }, { "docid": "5d63c5820cc8035822b86ef5fdaebefd", "text": "As the third most popular social network among millennials, Snapchat is well known for its picture and video messaging system that deletes content after it is viewed. However, the Stories feature of Snapchat offers a different perspective of ephemeral content sharing, with pictures and videos that are available for friends to watch an unlimited number of times for 24 hours. We conduct-ed an in-depth qualitative investigation by interviewing 18 participants and reviewing 14 days of their Stories posts. 
We identify five themes focused on how participants perceive and use the Stories feature, and apply a Goffmanesque metaphor to our analysis. We relate the Stories medium to other research on self-presentation and identity curation in social media.", "title": "" }, { "docid": "cc5ef7b506f0532e7ee2c89957846d5b", "text": "In this paper, we present recent contributions for the battle against one of the main problems faced by search engines: the spamdexing or web spamming. They are malicious techniques used in web pages with the purpose of circumvent the search engines in order to achieve good visibility in search results. To better understand the problem and finding the best setup and methods to avoid such virtual plague, in this paper we present a comprehensive performance evaluation of several established machine learning techniques. In our experiments, we employed two real, public and large datasets: the WEBSPAM-UK2006 and the WEBSPAM-UK2007 collections. The samples are represented by content-based, link-based, transformed link-based features and their combinations. The found results indicate that bagging of decision trees, multilayer perceptron neural networks, random forest and adaptive boosting of decision trees are promising in the task of web spam classification. Keywords—Spamdexing; web spam; spam host; classification, WEBSPAM-UK2006, WEBSPAM-UK2007.", "title": "" }, { "docid": "a31652c0236fb5da569ffbf326eb29e5", "text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. 
After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesis is supported by other evidence that the perceived risks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6", "title": "" } ]
scidocsrr
20f8e0c909c2cfe26a89e7fed1fd3cf0
Autonomous Driving in Reality with Reinforcement Learning and Image Translation
[ { "docid": "c1d84d7ba3fdae0f5f49c3740345fce5", "text": "Applying end-to-end learning to solve complex, interactive, pixeldriven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to highlevel policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our realworld experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on modelbased trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.", "title": "" }, { "docid": "6e02cdb0ade3479e0df03c30d9d69fa3", "text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.", "title": "" }, { "docid": "eaa333d0473978268f0b7ca6b4969009", "text": "Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways. Since there are many possible scenarios, manually tackling all possible cases will likely yield a too simplistic policy. Moreover, one must balance between unexpected behavior of other drivers/pedestrians and at the same time not to be too defensive so that normal traffic flow is maintained. In this paper we apply deep reinforcement learning to the problem of forming long term driving strategies. We note that there are two major challenges that make autonomous driving different from other robotic tasks. First, is the necessity for ensuring functional safety — something that machine learning has difficulty with given that performance is optimized at the level of an expectation over many instances. Second, the Markov Decision Process model often used in robotics is problematic in our case because of unpredictable behavior of other agents in this multi-agent scenario. We make three contributions in our work. 
First, we show how policy gradient iterations can be used, and the variance of the gradient estimation using stochastic gradient ascent can be minimized, without Markovian assumptions. Second, we decompose the problem into a composition of a Policy for Desires (which is to be learned) and trajectory planning with hard constraints (which is not learned). The goal of Desires is to enable comfort of driving, while hard constraints guarantees the safety of driving. Third, we introduce a hierarchical temporal abstraction we call an “Option Graph” with a gating mechanism that significantly reduces the effective horizon and thereby reducing the variance of the gradient estimation even further. The Option Graph plays a similar role to “structured prediction” in supervised learning, thereby reducing sample complexity, while also playing a similar role to LSTM gating mechanisms used in supervised deep networks.", "title": "" } ]
[ { "docid": "d4724f6b007c914120508b2e694a31d9", "text": "Finding semantically related words is a first step in the direction of automatic ontology building. Guided by the view that similar words occur in similar contexts, we looked at the syntactic context of words to measure their semantic similarity. Words that occur in a direct object relation with the verb drink, for instance, have something in common (liquidity, ...). Co-occurrence data for common nouns and proper names, for several syntactic relations, was collected from an automatically parsed corpus of 78 million words of newspaper text. We used several vector-based methods to compute the distributional similarity between words. Using Dutch EuroWordNet as evaluation standard, we investigated which vector-based method and which combination of syntactic relations is the strongest predictor of semantic similarity.", "title": "" }, { "docid": "a537edc6579892249d157e2dc2f31077", "text": "An efficient decoupling feeding network is proposed in this letter. It is composed of two directional couplers and two sections of transmission line for connection use. By connecting the two couplers, an indirect coupling with controlled magnitude and phase is introduced, which can be used to cancel out the direct coupling caused by space waves and surface waves between array elements. To demonstrate the method, a two-element microstrip antenna array with the proposed network has been designed, fabricated and measured. Both simulated and measured results have simultaneously proved that the proposed method presents excellent decoupling performance. The measured mutual coupling can be reduced to below -58 dB at center frequency. Meanwhile it has little influence on return loss and radiation patterns. The decoupling mechanism is simple and straightforward which can be easily applied in phased array antennas and MIMO systems.", "title": "" }, { "docid": "8fb5a9d2f68601d9e07d4a96ea45e585", "text": "The solid-state transformer (SST) is a promising power electronics solution that provides voltage regulation, reactive power compensation, dc-sourced renewable integration, and communication capabilities, in addition to the traditional step-up/step-down functionality of a transformer. It is gaining widespread attention for medium-voltage (MV) grid interfacing to enable increases in renewable energy penetration, and, commercially, the SST is of interest for traction applications due to its light weight as a result of medium-frequency isolation. The recent advancements in silicon carbide (SiC) power semiconductor device technology are creating a new paradigm with the development of discrete power semiconductor devices in the range of 10-15 kV and even beyond-up to 22 kV, as recently reported. In contrast to silicon (Si) IGBTs, which are limited to 6.5-kV blocking, these high-voltage (HV) SiC devices are enabling much simpler converter topologies and increased efficiency and reliability, with dramatic reductions of the size and weight of the MV power-conversion systems. This article presents the first-ever demonstration results of a three-phase MV grid-connected 100-kVA SST enabled by 15-kV SiC n-IGBTs, with an emphasis on the system design and control considerations. The 15-kV SiC n-IGBTs were developed by Cree and packaged by Powerex. The low-voltage (LV) side of the SST is built with 1,200-V, 100-A SiC MOSFET modules. The galvanic isolation is provided by three single-phase 22-kV/800-V, 10-kHz, 35-kVA-rated high-frequency (HF) transformers.
The three-phase all-SiC SST that interfaces with 13.8-kV and 480-V distribution grids is referred to as a transformerless intelligent power substation (TIPS). The characterization of the 15-kV SiC n-IGBTs, the development of the MV isolated gate driver, and the design, control, and system demonstration of the TIPS were undertaken by North Carolina State University's (NCSU's) Future Renewable Electrical Energy Delivery and Management (FREEDM) Systems Center, sponsored by an Advanced Research Projects Agency-Energy (ARPA-E) project.", "title": "" }, { "docid": "b64652316fc9ac5d1a049ab29e770afa", "text": "Future cooperative Intelligent Transport Systems (ITS) applications aimed to improve safety, efficiency and comfort on our roads put high demands on the underlying wireless communication system. To gain better understanding of the limitations of the 5.9 GHz frequency band and the set of communication protocols for medium range vehicle to vehicle (V2V) communication, a set of field trials with CALM M5 enabled prototypes has been conducted. This paper describes five different real vehicle traffic scenarios covering both urban and rural settings at varying vehicle speeds and under varying line-of-sight (LOS) conditions and discusses the connectivity (measured as Packet Reception Ratio) that could be achieved between the two test vehicles. Our measurements indicate a quite problematic LOS sensitivity that strongly influences the performance of V2V-based applications. We further discuss how the awareness of these context-based connectivity problems can be used to improve the design of possible future cooperative ITS safety applications.", "title": "" }, { "docid": "46c86c37bb888f457075dbdb9f30a148", "text": "Traditionally, nuclear medicine has been regarded mainly as a diagnostic specialty in which the administration of radioactive substances yields information whose importance far outweighs any potential risk to normal tissue. Now, however, therapeutic applications of radionuclide-based approaches are at last gaining the prominence they deserve. In this field, individual patient dosimetry is essential both for optimising the administered activity through the establishment of minimum effective and maximum tolerated absorbed doses and for determining a dose-response relationship as a basis for predicting tumour response [2]. Unfortunately, the lack of comprehensive clinical trials designed to bring out the value of radionuclide dosimetry in predicting therapy outcome has fed a general belief that dosimetry methods are daunting, prone to a large degree of uncertainty, and liable to generate increased costs, and thus held back their progress. This is, indeed, why Flux and colleagues remarked “radionuclide therapy remains the Cinderella of cancer treatment modalities, under-utilised with respect to more conventional treatments” [3]. Now, however, the availability of new technologies (e.g. PET/CT) and radioisotope pairs for diagnosis and therapy (e.g. I and I or Y and Y), supported by a new generation of personal computer software for internal dose assessment, is at last leading to an increase in the use of dosimetry in clinical therapeutic applications. 
To show the appreciable progress now being made in this field we here take a look at a sample of the most recent literature on dosimetry for radioiodine treatment of benign thyroid disease and differentiated thyroid carcinoma.", "title": "" }, { "docid": "425c96a3ed2d88bbc9324101626c992d", "text": "Nonlocal image representation or group sparsity has attracted considerable interest in various low-level vision tasks and has led to several state-of-the-art image denoising techniques, such as BM3D, learned simultaneous sparse coding. In the past, convex optimization with sparsity-promoting convex regularization was usually regarded as a standard scheme for estimating sparse signals in noise. However, using convex regularization cannot still obtain the correct sparsity solution under some practical problems including image inverse problems. In this letter, we propose a nonconvex weighted <inline-formula><tex-math notation=\"LaTeX\">$\\ell _p$</tex-math></inline-formula> minimization based group sparse representation framework for image denoising. To make the proposed scheme tractable and robust, the generalized soft-thresholding algorithm is adopted to solve the nonconvex <inline-formula><tex-math notation=\"LaTeX\"> $\\ell _p$</tex-math></inline-formula> minimization problem. In addition, to improve the accuracy of the nonlocal similar patch selection, an adaptive patch search scheme is proposed. Experimental results demonstrate that the proposed approach not only outperforms many state-of-the-art denoising methods such as BM3D and weighted nuclear norm minimization, but also results in a competitive speed.", "title": "" }, { "docid": "9254b7c1f6a0393524d68aaa683dab58", "text": "Millions of users share their opinions on Twitter, making it a valuable platform for tracking and analyzing public sentiment. Such tracking and analysis can provide critical information for decision making in various domains. Therefore it has attracted attention in both academia and industry. Previous research mainly focused on modeling and tracking public sentiment. In this work, we move one step further to interpret sentiment variations. We observed that emerging topics (named foreground topics) within the sentiment variation periods are highly related to the genuine reasons behind the variations. Based on this observation, we propose a Latent Dirichlet Allocation (LDA) based model, Foreground and Background LDA (FB-LDA), to distill foreground topics and filter out longstanding background topics. These foreground topics can give potential interpretations of the sentiment variations. To further enhance the readability of the mined reasons, we select the most representative tweets for foreground topics and develop another generative model called Reason Candidate and Background LDA (RCB-LDA) to rank them with respect to their “popularity” within the variation period. Experimental results show that our methods can effectively find foreground topics and rank reason candidates. The proposed models can also be applied to other tasks such as finding topic differences between two sets of documents.", "title": "" }, { "docid": "ca0511810895cfdce607f4fc4df2f4f7", "text": "This paper presents an extension of existing software architecture tools to model physical systems, their interconnections, and the interactions between physical and cyber components. 
We introduce a new cyber-physical system (CPS) architectural style to support the construction of architectural descriptions of complete systems and to serve as the reference context for analysis and evaluation of design alternatives using existing model-based tools. The implementation of the CPS architectural style in AcmeStudio includes behavioral annotations on components and connectors using either finite state processes (FSP) or linear hybrid automata (LHA) with plug-ins to perform behavior analysis. The application of the CPS architectural style is illustrated for the STARMAC quadrotor.", "title": "" }, { "docid": "27dff3b0339eacbdc3ab3ad8d16598ca", "text": "In this paper we show a low cost and environmentally friendly fabrication for an agricultural sensing application. An antenna, a soil moisture sensor, and a leaf wetness sensor are inkjet-printed on paper substrate. A microprocessor attached to the paper substrate is capable of detecting the capacitance change on the surface of the sensor, and report the data over the wireless communication interface. This sensing system is useful to optimize irrigation systems.", "title": "" }, { "docid": "8f9bf08bb52e5c192512f7b43ed50ba7", "text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires to solve a linear programming (LP) problem, but for complex-valued systems it needs to solve a second-order cone programming (SOCP) problem, which demands more computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.", "title": "" }, { "docid": "dc817bc11276d76f8d97f67e4b1b2155", "text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. 
A short conclusion will describe further research & analysis to be performed in the field of SOC design.", "title": "" }, { "docid": "875e165e70000d15b11d724607be1917", "text": "Internet-based Chat environments such as Internet relay Chat and instant messaging pose a challenge for data mining and information retrieval systems due to the multi-threaded, overlapping nature of the dialog and the nonstandard usage of language. In this paper we present preliminary methods of topic detection and topic thread extraction that augment a typical TF-IDF-based vector space model approach with temporal relationship information between posts of the Chat dialog combined with WordNet hypernym augmentation. We show results that promise better performance than using only a TF-IDF bag-of-words vector space model.", "title": "" }, { "docid": "66b912a52197a98134d911ee236ac869", "text": "We survey the field of quantum information theory. In particular, we discuss the fundamentals of the field, source coding, quantum error-correcting codes, capacities of quantum channels, measures of entanglement, and quantum cryptography.", "title": "" }, { "docid": "c59652c2166aefb00469517cd270dea2", "text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.", "title": "" }, { "docid": "5892af3dde2314267154a0e5a3c76985", "text": "We describe a method for on-line handwritten signature verification. The signatures are acquired using a digitizing tablet which captures both dynamic and spatial information of the writing. After preprocessing the signature, several features are extracted. The authenticity of a writer is determined by comparing an input signature to a stored reference set (template) consisting of three signatures. The similarity between an input signature and the reference set is computed using string matching and the similarity value is compared to a threshold. Several approaches for obtaining the optimal threshold value from the reference set are investigated. The best result yields a false reject rate of 2.8% and a false accept rate of 1.6%. Experiments on a database containing a total of 1232 signatures of 102 individuals show that writer-dependent thresholds yield better results than using a common threshold.", "title": "" }, { "docid": "3854ead43024ebc6ac942369a7381d71", "text": "During the past two decades, the prevalence of obesity in children has risen greatly worldwide. Obesity in childhood causes a wide range of serious complications, and increases the risk of premature illness and death later in life, raising public-health concerns. Results of research have provided new insights into the physiological basis of bodyweight regulation. However, treatment for childhood obesity remains largely ineffective.
In view of its rapid development in genetically stable populations, the childhood obesity epidemic can be primarily attributed to adverse environmental factors for which straightforward, if politically difficult, solutions exist.", "title": "" }, { "docid": "6cd955719bc34e153a48d591086bfc52", "text": "This paper proposes a rule-based approach for automatic configuration of mechatronic components in a novel agent-based manufacturing automation architecture known as MIRA, implemented using Prolog. Through this method, MIRAs are enriched with semantic knowledge representation and, based on that, perform some reasoning and decision making (both at the design stage and even during the operation) to achieve the desired goals. This approach is illustrated in a simple case study in which composition of a reconfigurable pick-and-place robot with various linear cylinders is achieved through rule-based reasoning.", "title": "" }, { "docid": "cb1645b5b37e99a1dac8c6af1d6b1027", "text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective countermeasures have drawn significant investment from governments, companies, and researchers. A large number of methods have been developed for automated hate speech detection online. This aims to classify textual content into non-hate or hate speech, in which case the method may also identify the targeting characteristics (i.e., types of hate, such as race, and religion) in the hate speech. However, we notice significant difference between the performance of the two (i.e., non-hate v.s. hate). In this work, we argue for a focus on the latter problem for practical reasons. We show that it is a much more challenging task, as our analysis of the language in the typical datasets shows that hate speech lacks unique, discriminative features and therefore is found in the ‘long tail’ in a dataset that is difficult to discover. We then propose Deep Neural Network structures serving as feature extractors that are particularly effective for capturing the semantics of hate speech. Our methods are evaluated on the largest collection of hate speech datasets based on Twitter, and are shown to be able to outperform the best performing method by up to 5 percentage points in macro-average F1, or 8 percentage points in the more challenging case of identifying hateful content.", "title": "" }, { "docid": "990067864c123b45e5c3d06ef1a0cf7d", "text": "BACKGROUND\nRetrospective single-centre series have shown the feasibility of sentinel lymph-node (SLN) identification in endometrial cancer. We did a prospective, multicentre cohort study to assess the detection rate and diagnostic accuracy of the SLN procedure in predicting the pathological pelvic-node status in patients with early stage endometrial cancer.\n\n\nMETHODS\nPatients with International Federation of Gynecology and Obstetrics (FIGO) stage I-II endometrial cancer had pelvic SLN assessment via cervical dual injection (with technetium and patent blue), and systematic pelvic-node dissection. All lymph nodes were histopathologically examined and SLNs were serial sectioned and examined by immunochemistry. The primary endpoint was estimation of the negative predictive value (NPV) of sentinel-node biopsy per hemipelvis. This is an ongoing study for which recruitment has ended. The study is registered with ClinicalTrials.gov, number NCT00987051.\n\n\nFINDINGS\nFrom July 5, 2007, to Aug 4, 2009, 133 patients were enrolled at nine centres in France. 
No complications occurred after injection of technetium colloid and no anaphylactic reactions were noted after patent blue injection. No surgical complications were reported during SLN biopsy, including procedures that involved conversion to open surgery. At least one SLN was detected in 111 of the 125 eligible patients. 19 of 111 (17%) had pelvic-lymph-node metastases. Five of 111 patients (5%) had an associated SLN in the para-aortic area. Considering the hemipelvis as the unit of analysis, NPV was 100% (95% CI 95-100) and sensitivity 100% (63-100). Considering the patient as the unit of analysis, three patients had false-negative results (two had metastatic nodes in the contralateral pelvic area and one in the para-aortic area), giving an NPV of 97% (95% CI 91-99) and sensitivity of 84% (62-95). All three of these patients had type 2 endometrial cancer. Immunohistochemistry and serial sectioning detected metastases undiagnosed by conventional histology in nine of 111 (8%) patients with detected SLNs, representing nine of the 19 patients (47%) with metastases. SLN biopsy upstaged 10% of patients with low-risk and 15% of those with intermediate-risk endometrial cancer.\n\n\nINTERPRETATION\nSLN biopsy with cervical dual labelling could be a trade-off between systematic lymphadenectomy and no dissection at all in patients with endometrial cancer of low or intermediate risk. Moreover, our study suggests that SLN biopsy could provide important data to tailor adjuvant therapy.\n\n\nFUNDING\nDirection Interrégionale de Recherche Clinique, Ile-de-France, Assistance Publique-Hôpitaux de Paris.", "title": "" }, { "docid": "afe36d039098b94a77ea58fa56bd895d", "text": "We present a framework to automatically detect and remove shadows in real world scenes from a single image. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The features are learned at the super-pixel level and along the dominant boundaries in the image. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow masks. Using the detected shadow masks, we propose a Bayesian formulation to accurately extract shadow matte and subsequently remove shadows. The Bayesian formulation is based on a novel model which accurately models the shadow generation process in the umbra and penumbra regions. The model parameters are efficiently estimated using an iterative optimization procedure. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.", "title": "" } ]
scidocsrr
a557433c4b7d7d54e55d49c7701c559f
The Pirogoff amputation for necrosis of the forefoot: a case report.
[ { "docid": "cf39011df767095d9aa1a4521da29fbb", "text": "Amputation is one of the oldest procedures in surgery; this is shown on the wall of the Temple of Rameses III near Luxor. In the foot, amputation between the tarsometatarsal level and the level of the Syme procedure results in an equinus deformity due to imbalance between tendons acting at the ankle. Syme’s amputation is simple and produces a stump with end-bearing properties and good proprioception (Harris 1956). Its disadvantages are shortening, some difficulty in fitting a prosthesis because of the thickening of the ankle, and occasional failure to achieve full weight-bearing due to migration of the fat pad. Boyd’s operation (1939) retains the calcaneus and fuses it with the tibia in the ankle mortise. It provides an excellent weight-bearing stump with no need for an artificial limb, but it has been discarded because of difficulty in obtaining sound calcaneotibial fusion (Mills 1981). During the war in Afghanistan, many soldiers and civilians were injured by mines, and had a foot or part of a foot blown off. In those with partial preservation of the hindfoot, we developed a modified Boyd amputation, using the talus as a bone graft. Three cases are reported. Patients and methods. Three male patients of average age 28 years were referred a few days after injury. All had lost the forefoot and showed equinus deformity of the remaining hindfoot (Fig. 1). The aim was to save as much of the foot as possible and to avoid a higher level of amputation. Repeated debridement was required before definitive operation (Fig. 2). Technique of operation. When the wound is clean, operation is performed under general anaesthesia, without a tourniquet. Two incisions are made. A curved incision behind the medial malleolus exposes the neurovascular bundle and the medial side of the ankle; a lateral incision behind the lateral malleolus exposes the peroneus tendons and the lateral ankle. Long plantar and dorsal flaps are preserved and the talus is removed in one piece, by careful dissection, tilting the calcaneus medially. The tibial surface of the ankle and the superior surface of the calcaneus are cleared of articular surface to create flat bony surfaces and bony and soft-tissue debris is removed. The talus is trimmed and reshaped to fit the tibia on its superior surface and flattened on its inferior surface to fit the prepared calcaneus. This block graft is inserted and held in position by a Steinmann pin passed from the heel through the calcaneus and the graft into the tibia. The wounds are irrigated and closed. A small incision over the posterior heel allows complete tenotomy of tendo Achillis. A belowknee cast is applied to protect the wound and to help maintain the correct position of the heel pad. The patient is mobilised, non-weightbearing on crutches. After ten days the wound is exposed, and if there is no infection, the skin defect on the anterior aspect of the stump (Fig. 3) is covered by a skin graft. The pin is retained until there is radiographic bony union at about 10 to 14 weeks. Important points. The medial vascular bundle must be exposed to assist preservation of the blood supply to the calcaneus. The division of tendo Achillis is essential to remove a deforming force and allow union without equinus deformity. Results. In all three patients there was bony union (Fig. 4), with eventual skin healing after secondary suture in one expose the femoral nerve to thermal lesions from cement (Simmons et al 1991). 
The use of certain reinforcing rings, especially when they are oversized, also carry a risk. The clinical diagnosis of femoral nerve palsy is usually obvious but it should be confirmed by EMG. The absence of clinical and electrodiagnostic recovery after four to six months is an indication for exploratory surgery. Conclusion. Complete division of the femoral nerve during THR is extremely rare and very few cases have been described. This danger should be recognised, however, in order to avoid surgery that may possibly injure the nerve. It is important to diagnose these nerve injuries and follow their course closely by regular examination so that repair may be performed at the earliest opportunity. No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.", "title": "" } ]
[ { "docid": "20cb30a452bf20c9283314decfb7eb6e", "text": "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.", "title": "" }, { "docid": "79a2cc561cd449d8abb51c162eb8933d", "text": "We introduce a new test of how well language models capture meaning in children’s books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lowerfrequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.", "title": "" }, { "docid": "08af1b80f0e58fbaa75a5a61b9a716e3", "text": "Case Based Reasoning (CBR) is an important technique in artificial intelligence, which has been applied to various kinds of problems in a wide range of domains. Selecting case representation formalism is critical for the proper operation of the overall CBR system. In this paper, we survey and evaluate all of the existing case representation methodologies. Moreover, the case retrieval and future challenges for effective CBR are explained. Case representation methods are grouped in to knowledge-intensive approaches and traditional approaches. The first group overweight the second one. The first methods depend on ontology and enhance all CBR processes including case representation, retrieval, storage, and adaptation. By using a proposed set of qualitative metrics, the existing methods based on ontology for case representation are studied and evaluated in details. All these systems have limitations. No approach exceeds 53% of the specified metrics. The results of the survey explain the current limitations of CBR systems. It shows that ontology usage in case representation needs improvements to achieve semantic representation and semantic retrieval in CBR system. 
Keywords—Case based reasoning; Ontological case representation; Case retrieval; Clinical decision support system; Knowledge management", "title": "" }, { "docid": "417186e59f537a0f6480fc7e05eafb0c", "text": "Retrieving correct answers for non-factoid queries poses significant challenges for current answer retrieval methods. Methods either involve the laborious task of extracting numerous features or are ineffective for longer answers. We approach the task of non-factoid question answering using deep learning methods without the need of feature extraction. Neural networks are capable of learning complex relations based on relatively simple features which make them a prime candidate for relating non-factoid questions to their answers. In this paper, we show that end to end training with a Bidirectional Long Short Term Memory (BLSTM) network with a rank sensitive loss function results in significant performance improvements over previous approaches without the need for combining additional models.", "title": "" }, { "docid": "57a48dee2cc149b70a172ac5785afc6c", "text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.", "title": "" }, { "docid": "1593fd6f9492adc851c709e3dd9b3c5f", "text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.", "title": "" }, { "docid": "e0205caf2ca2bc0727f541e236d849ef", "text": "This paper is aimed at modelling of a distinct smart charging station for electric vehicles (EVs) that is suitable for DC quick EV charging while ensuring minimum stress on the power grid. 
Operation of the charging station is managed in such a way that it is either supplied by photovoltaic (PV) power or the power grid, and the vehicle-to-grid (V2G) is also implemented for improving the stability of the grid during peak load hours. The PV interfaced DC/DC converter and grid interfaced DC/AC bidirectional converter share a DC bus. A smooth transition of one operating mode to another demonstrates the effectiveness of the employed control strategy. Modelling and control of the different components are explained and are implemented in Simulink. Simulations illustrate the feasible behaviour of the charging station under all operating modes in terms of the four-way interaction among PV, EVs and the grid along with V2G operation. Additionally, a business model is discussed with comprehensive analysis of cost estimation for the deployment of charging facilities in a residential area. It has been recognized that EVs bring new opportunities in terms of providing regulation services and consumption flexibility by varying the recharging power at a certain time instant. The paper also discusses the potential financial incentives required to inspire EV owners for active participation in the demand response mechanism.", "title": "" }, { "docid": "d49d405fc765b647b39dc9ef1b4d6ba9", "text": "The World Wide Web plays an important role while searching for information in the data network. Users are constantly exposed to an ever-growing flood of information. Our approach will help in searching for the exact user relevant content from multiple search engines thus, making the search more efficient and reliable. Our framework will extract the relevant result records based on two approaches i.e. Stored URL list and Run time Generated URL list. Finally, the unique set of records is displayed in a common framework's search result page. The extraction is performed using the concepts of Document Object Model (DOM) tree. The paper comprises of a concept of threshold and data filters to detect and remove irrelevant & redundant data from the web page. The data filters will also be used to further improve the similarity check of data records. Our system will be able to extract 75%-80% user relevant content by eliminating noisy content from the different structured web pages like blogs, forums, articles etc. in the dynamic environment. Our approach shows significant advantages in both precision and recall.", "title": "" }, { "docid": "5664d82c9b372e14d7fd8f28b9bdff35", "text": "The perceived resolution of matrix displays increases when the relative position of the color subpixels is taken into account. ‘Subpixel rendering’ algorithms are being used to convert an input image to subpixel-corrected display images. This paper deals with the consequences of the subpixel structure, and the theoretical background of the resolution gain. We will show that this theory allows a low-cost implementation in an image scaler. This leads to high flexibility, allowing different subpixel arrangements and a simple control over the trade-off between perceived resolution and color errors.", "title": "" }, { "docid": "46921a173ee1ed2a379da869060637d4", "text": "Given a table of data, existing systems can often detect basic atomic types (e.g., strings vs. numbers) for each column. 
A new generation of data-analytics and data-preparation systems are starting to automatically recognize rich semantic types such as date-time, email address, etc., for such metadata can bring an array of benefits including better table understanding, improved search relevance, precise data validation, and semantic data transformation. However, existing approaches only detect a limited number of types using regular-expression-like patterns, which are often inaccurate, and cannot handle rich semantic types such as credit card and ISBN numbers that encode semantic validations (e.g., checksum).\n We developed AUTOTYPE from open-source repositories like GitHub. Users only need to provide a set of positive examples for a target data type and a search keyword, our system will automatically identify relevant code, and synthesize type-detection functions using execution traces. We compiled a benchmark with 112 semantic types, out of which the proposed system can synthesize code to detect 84 such types at a high precision. Applying the synthesized type-detection logic on web table columns have also resulted in a significant increase in data types discovered compared to alternative approaches.", "title": "" }, { "docid": "70f370cd540a1386e7ce824f7a632746", "text": "As deep learning models are applied to increasingly diverse and complex problems, a key bottleneck is gathering enough highquality training labels tailored to each task. Users therefore turn to weak supervision, relying on imperfect sources of labels like user-defined heuristics and pattern matching. Unfortunately, with weak supervision, users have to design different labeling sources for each task. This process can be both time consuming and expensive: domain experts often perform repetitive steps like guessing optimal numerical thresholds and designing informative text patterns. To address these challenges, we present Reef, a system to automatically generate heuristics using a small labeled dataset to assign training labels to a large, unlabeled dataset in the weak supervision setting. Reef generates heuristics that each labels only the subset of the data it is accurate for, and iteratively repeats this process until the heuristics together label a large portion of the unlabeled data. We also develop a statistical measure that guarantees the iterative process will automatically terminate before it degrades training label quality. Compared to the best known user-defined heuristics developed over several days, Reef automatically generates heuristics in under five minutes and performs up to 9.74 F1 points better. In collaborations with users at several large corporations, research labs, Stanford Hospital and Clinics, and on open source text and image datasets, Reef outperforms other automated approaches like semi-supervised learning by up to 14.35 F1 points.", "title": "" }, { "docid": "7076523aea56d6d57388ccd66ece17dc", "text": "We present a system for automatic annotation of daily experience from multisensory streams on smartphones. Using smartphones as platform facilitates collection of naturalistic daily activity, which is difficult to collect with multiple on-body sensors or array of sensors affixed to indoor locations. 
However, recognizing daily activities in unconstrained settings is more challenging than in controlled environments: 1) multiples heterogeneous sensors equipped in smartphones are noisier, asynchronous, vary in sampling rates and can have missing data; 2) unconstrained daily activities are continuous, can occur concurrently, and have fuzzy onset and offset boundaries; 3) ground-truth labels obtained from the user’s self-report can be erroneous and accurate only in a coarse time scale. To handle these problems, we present in this paper a flexible framework for incorporating heterogeneous sensory modalities combined with state-of-the-art classifiers for sequence labeling. We evaluate the system with real-life data containing 11721 minutes of multisensory recordings, and demonstrate the accuracy and efficiency of the proposed system for practical lifelogging applications.", "title": "" }, { "docid": "59344cfe759a89a68e7bc4b0a5c971b1", "text": "A non-linear support vector machine (NLSVM) seizure classification SoC with 8-channel EEG data acquisition and storage for epileptic patients is presented. The proposed SoC is the first work in literature that integrates a feature extraction (FE) engine, patient specific hardware-efficient NLSVM classification engine, 96 KB SRAM for EEG data storage and low-noise, high dynamic range readout circuits. To achieve on-chip integration of the NLSVM classification engine with minimum area and energy consumption, the FE engine utilizes time division multiplexing (TDM)-BPF architecture. The implemented log-linear Gaussian basis function (LL-GBF) NLSVM classifier exploits the linearization to achieve energy consumption of 0.39 μ J/operation and reduces the area by 28.2% compared to conventional GBF implementation. The readout circuits incorporate a chopper-stabilized DC servo loop to minimize the noise level elevation and achieve noise RTI of 0.81 μ Vrms for 0.5-100 Hz bandwidth with an NEF of 4.0. The 5 × 5 mm (2) SoC is implemented in a 0.18 μm 1P6M CMOS process consuming 1.83 μ J/classification for 8-channel operation. SoC verification has been done with the Children's Hospital Boston-MIT EEG database, as well as with a specific rapid eye-blink pattern detection test, which results in an average detection rate, average false alarm rate and latency of 95.1%, 0.94% (0.27 false alarms/hour) and 2 s, respectively.", "title": "" }, { "docid": "ef74392a9681d16b14970740cbf85191", "text": "We propose an efficient physics-based method for dexterous ‘real hand’ - ‘virtual object’ interaction in Virtual Reality environments. Our method is based on the Coulomb friction model, and we show how to efficiently implement it in a commodity VR engine for realtime performance. This model enables very convincing simulations of many types of actions such as pushing, pulling, grasping, or even dexterous manipulations such as spinning objects between fingers without restrictions on the objects' shapes or hand poses. Because it is an analytic model, we do not require any prerecorded data, in contrast to previous methods. For the evaluation of our method, we conduction a pilot study that shows that our method is perceived more realistic and natural, and allows for more diverse interactions. Further, we evaluate the computational complexity of our method to show real-time performance in VR environments.", "title": "" }, { "docid": "07c103f59f5a972227be29e69dd3d440", "text": "Manifold-valued datasets are widely encountered in many computer vision tasks. 
A non-linear analog of the PCA algorithm, called the Principal Geodesic Analysis (PGA) algorithm suited for data lying on Riemannian manifolds was reported in literature a decade ago. Since the objective function in the PGA algorithm is highly non-linear and hard to solve efficiently in general, researchers have proposed a linear approximation. Though this linear approximation is easy to compute, it lacks accuracy especially when the data exhibits a large variance. Recently, an alternative called the exact PGA was proposed which tries to solve the optimization without any linearization. For general Riemannian manifolds, though it yields a better accuracy than the original (linearized) PGA, for data that exhibit large variance, the optimization is not computationally efficient. In this paper, we propose an efficient exact PGA algorithm for constant curvature Riemannian manifolds (CCM-EPGA). The CCM-EPGA algorithm differs significantly from existing PGA algorithms in two aspects, (i) the distance between a given manifold-valued data point and the principal submanifold is computed analytically and thus no optimization is required as in the existing methods. (ii) Unlike the existing PGA algorithms, the descent into codimension-1 submanifolds does not require any optimization but is accomplished through the use of the Rimeannian inverse Exponential map and the parallel transport operations. We present theoretical and experimental results for constant curvature Riemannian manifolds depicting favorable performance of the CCM-EPGA algorithm compared to existing PGA algorithms. We also present data reconstruction from the principal components which has not been reported in literature in this setting.", "title": "" }, { "docid": "8e3ec22c60c9df59570d1781cf03c627", "text": "We examine the recent move from a rhetoric of “users” toward one of “makers,” “crafters,” and “hackers” within HCI discourse. Through our analysis, we make several contributions. First, we provide a general overview of the structure and common framings within research on makers. We discuss how these statements reconfigure themes of empowerment and progress that have been central to HCI rhetoric since the field's inception. In the latter part of the article, we discuss the consequences of these shifts for contemporary research problems. In particular, we explore the problem of designed obsolescence, a core issue for Sustainable Interaction Design (SID) research. We show how the framing of the maker, as an empowered subject, presents certain opportunities and limitations for this research discourse. Finally, we offer alternative framings of empowerment that can expand maker discourse and its use in contemporary research problems such as SID.", "title": "" }, { "docid": "32bf3e0ce6f9bc8864bd905ffebcfcce", "text": "BACKGROUND AND PURPOSE\nTo improve the accuracy of early postonset prediction of motor recovery in the flaccid hemiplegic arm, the effects of change in motor function over time on the accuracy of prediction were evaluated, and a prediction model for the probability of regaining dexterity at 6 months was developed.\n\n\nMETHODS\nIn 102 stroke patients, dexterity and paresis were measured with the Action Research Arm Test, Motricity Index, and Fugl-Meyer motor evaluation. For model development, 23 candidate determinants were selected. 
Logistic regression analysis was used for prognostic factors and model development.\n\n\nRESULTS\nAt 6 months, some dexterity in the paretic arm was found in 38%, and complete functional recovery was seen in 11.6% of the patients. Total anterior circulation infarcts, right hemisphere strokes, homonymous hemianopia, visual gaze deficit, visual inattention, and paresis were statistically significant related to a poor arm function. Motricity Index leg scores of at least 25 points in the first week and Fugl-Meyer arm scores of 11 points in the second week increasing to 19 points in the fourth week raised the probability of developing some dexterity (Action Research Arm Test >or=10 points) from 74% (positive predictive value [PPV], 0.74; 95% confidence interval [CI], 0.63 to 0.86) to 94% (PPV, 0.83; 95% CI, 0.76 to 0.91) at 6 months. No change in probabilities of prediction dexterity was found after 4 weeks.\n\n\nCONCLUSIONS\nBased on the Fugl-Meyer scores of the flaccid arm, optimal prediction of arm function outcome at 6 months can be made within 4 weeks after onset. Lack of voluntary motor control of the leg in the first week with no emergence of arm synergies at 4 weeks is associated with poor outcome at 6 months.", "title": "" }, { "docid": "20c3addef683da760967df0c1e83f8e3", "text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.", "title": "" }, { "docid": "809b6723fde22640a8e09dd50442653f", "text": "Matching pursuits is a well known technique for signal representation and has also been used as a feature extractor for some classification systems. However, applications that use matching pursuits (MP) algorithm in their feature extraction stage are quite problem domain specific, making their adaptation for other types of problems quite hard. In this paper we propose a matching pursuits based similarity measure that uses only the dictionary, coefficients and residual information provided by the MP algorithm while comparing two signals. Hence it is easily applicable to a variety of problems. We show that using the MP based similarity measure for competitive agglomerative fuzzy clustering leads to an interesting and novel update equation that combines the standard fuzzy prototype updating equation with a term involving the error between approximated signals and approximated prototypes. The potential value of the similarity measure is investigated using the fuzzy k-nearest prototype algorithm of Frigui for a two-class, signal classification problem. It is shown that the new similarity measure significantly outperforms the Euclidean distance.", "title": "" }, { "docid": "5b748e2bc26e3fab531f0f741f7de176", "text": "Computer models are widely used to simulate real processes. Within the computer model, there always exist some parameters which are unobservable in the real process but need to be specified in the computer model. The procedure to adjust these unknown parameters in order to fit the model to observed data and improve its predictive capability is known as calibration. 
In traditional calibration, once the optimal calibration parameter set is obtained, it is treated as known for future prediction. Calibration parameter uncertainty introduced from estimation is not accounted for. We will present a Bayesian calibration approach for stochastic computer models. We account for these additional uncertainties and derive the predictive distribution for the real process. Two numerical examples are used to illustrate the accuracy of the proposed method.", "title": "" } ]
scidocsrr
4aadf675f8afb3ec90e3c383948bd4ea
The need to belong: desire for interpersonal attachments as a fundamental human motivation.
[ { "docid": "4825ada359be4788a52f1fd616142a19", "text": "Attachment theory is extended to pertain to developmental changes in the nature of children's attachments to parents and surrogate figures during the years beyond infancy, and to the nature of other affectional bonds throughout the life cycle. Various types of affectional bonds are examined in terms of the behavioral systems characteristic of each and the ways in which these systems interact. Specifically, the following are discussed: (a) the caregiving system that underlies parents' bonds to their children, and a comparison of these bonds with children's attachments to their parents; (b) sexual pair-bonds and their basic components entailing the reproductive, attachment, and caregiving systems; (c) friendships both in childhood and adulthood, the behavioral systems underlying them, and under what circumstances they may become enduring bonds; and (d) kinship bonds (other than those linking parents and their children) and why they may be especially enduring.", "title": "" } ]
[ { "docid": "dde5083017c2db3ffdd90668e28bab4b", "text": "Current industry standards for describing Web Services focus on ensuring interoperability across diverse platforms, but do not provide a good foundation for automating the use of Web Services. Representational techniques being developed for the Semantic Web can be used to augment these standards. The resulting Web Service specifications enable the development of software programs that can interpret descriptions of unfamiliar Web Services and then employ those services to satisfy user goals. OWL-S (“OWL for Services”) is a set of notations for expressing such specifications, based on the Semantic Web ontology language OWL. It consists of three interrelated parts: a profile ontology, used to describe what the service does; a process ontology and corresponding presentation syntax, used to describe how the service is used; and a grounding ontology, used to describe how to interact with the service. OWL-S can be used to automate a variety of service-related activities involving service discovery, interoperation, and composition. A large body of research on OWL-S has led to the creation of many open-source tools for developing, reasoning about, and dynamically utilizing Web Services.", "title": "" }, { "docid": "a1fb87b94d93da7aec13044d95ee1e44", "text": "Many natural language processing tasks solely rely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies by soft probabilities between every two tokens, but they are not effective and efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, “reinforced self-attention (ReSA)”, for the mutual benefit of each other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention called “reinforced sequence sampling (RSS)”, selecting tokens in parallel and trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, “reinforced self-attention network (ReSAN)”, solely based on ReSA. It achieves state-of-the-art performance on both Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.", "title": "" }, { "docid": "b04f42415573e0ada85afcf7f419a3ae", "text": "Numerous embedding models have been recently explored to incorporate semantic knowledge into visual recognition. Existing methods typically focus on minimizing the distance between the corresponding images and texts in the embedding space but do not explicitly optimize the underlying structure. Our key observation is that modeling the pairwise image-image relationship improves the discrimination ability of the embedding model. In this paper, we propose the structured discriminative and difference constraints to learn visual-semantic embeddings. First, we exploit the discriminative constraints to capture the intraand inter-class relationships of image embeddings. The discriminative constraints encourage separability for image instances of different classes. 
Second, we align the difference vector between a pair of image embeddings with that of the corresponding word embeddings. The difference constraints help regularize image embeddings to preserve the semantic relationships among word embeddings. Extensive evaluations demonstrate the effectiveness of the proposed structured embeddings for single-label classification, multilabel classification, and zero-shot recognition.", "title": "" }, { "docid": "50840b0308e1f884b61c9f824b1bf17f", "text": "The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem --- both scheduling and assignment of filters to processors --- as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.", "title": "" }, { "docid": "3cda12f3efaf872571b4f98bf988f129", "text": "Human drivers use nonverbal communication and anticipation of other drivers' actions to master conflicts occurring in everyday driving situations. Without a high penetration of vehicle-to-vehicle communication an autonomous vehicle has to have the possibility to understand intentions of others and share own intentions with the surrounding traffic participants. This paper proposes a cooperative combinatorial motion planning algorithm without the need for inter vehicle communication based on Monte Carlo Tree Search (MCTS). We motivate why MCTS is particularly suited for the autonomous driving domain. Furthermore, adoptions to the MCTS algorithm are presented as for example simultaneous decisions, the usage of the Intelligent Driver Model as microscopic traffic simulation, and a cooperative cost function. We further show simulation results of merging scenarios in highway-like situations to underline the cooperative nature of the approach.", "title": "" }, { "docid": "2e4a3f77d0b8c31600fca0f1af82feb5", "text": "Forwarding data in scenarios where devices have sporadic connectivity is a challenge. An example scenario is a disaster area, where forwarding information generated in the incident location, like victims’ medical data, to a coordination point is critical for quick, accurate and coordinated intervention. New applications are being developed based on mobile devices and wireless opportunistic networks as a solution to destroyed or overused communication networks. But the performance of opportunistic routing methods applied to emergency scenarios is unknown today. 
In this paper, we compare and contrast the efficiency of the most significant opportunistic routing protocols through simulations in realistic disaster scenarios in order to show how the different characteristics of an emergency scenario impact in the behaviour of each one of them. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "19a1f9c9f3dec6f90d08479f0669d0dc", "text": "We present a multi-stream bi-directional recurrent neural network for fine-grained action detection. Recently, twostream convolutional neural networks (CNNs) trained on stacked optical flow and image frames have been successful for action recognition in videos. Our system uses a tracking algorithm to locate a bounding box around the person, which provides a frame of reference for appearance and motion and also suppresses background noise that is not within the bounding box. We train two additional streams on motion and appearance cropped to the tracked bounding box, along with full-frame streams. Our motion streams use pixel trajectories of a frame as raw features, in which the displacement values corresponding to a moving scene point are at the same spatial position across several frames. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. We show that our bi-directional LSTM network utilizes about 8 seconds of the video sequence to predict an action label. We test on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we introduce and make available to the community with this paper. The results demonstrate that our method significantly outperforms state-of-the-art action detection methods on both datasets.", "title": "" }, { "docid": "5a91b2d8611b14e33c01390181eb1891", "text": "Rapidly expanding volume of publications in the biomedical domain makes it increasingly difficult for a timely evaluation of the latest literature. That, along with a push for automated evaluation of clinical reports, present opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings in the LSTM-CRF model used by AllenNLP for named entity recognition. We show these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task.", "title": "" }, { "docid": "a39d7490a353f845da616a06eedbb211", "text": "The explosive growth in online information is making it harder for large, globally distributed organizations to foster collaboration and leverage their intellectual assets. Recently, there has been a growing interest in the development of next generation knowledge management systems focussing on the artificial intelligence based technologies. We propose a generic knowledge management system architecture based on ADIPS (agent-based distributed information processing system) framework. 
This contributes to the stream of research on intelligent KM system to supports the creation, acquisition, management, and sharing of information that is widely distributed over a network system. It will benefit the users through the automatic provision of timely and relevant information with minimal effort to search for that information. Ontologies which stand out as a keystone of new generation of multiagent information systems, are used for the purpose of structuring the resources. This framework provides personalized information delivery, identifies items of interest to user proactively and enables unwavering management of distributed intellectual assets.", "title": "" }, { "docid": "3743ce90fe49731fb055aebbcb5a3cd5", "text": "Users of today’s popular wide-area apps (e.g., Twitter, Google Docs, and Words with Friends) must no longer save and reload when updating shared data; instead, these applications are reactive, providing the illusion of continuous synchronization across mobile devices and the cloud. Achieving this illusion poses a complex distributed data management problem for programmers. This paper presents the first reactive data management service, called Diamond, which provides persistent cloud storage, reliable synchronization between storage and mobile devices, and automated execution of application code in response to shared data updates. We demonstrate that Diamond greatly simplifies the design of reactive applications, strengthens distributed data sharing guarantees, and supports automated reactivity with low performance overhead.", "title": "" }, { "docid": "89dea4ec4fd32a4a61be184d97ae5ba6", "text": "In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image-synthesis. Based on the principal of positionalequivariance of features, Capsule Network’s ability to encode spatial relationships between the features of the image helps it become a more powerful critic in comparison to Convolutional Neural Networks (CNNs) used in current architectures for image synthesis. Our proposed GAN architectures learn the data manifold much faster and therefore, synthesize visually accurate images in significantly lesser number of training samples and training epochs in comparison to GANs and its variants that use CNNs. Apart from analyzing the quantitative results corresponding the images generated by different architectures, we also explore the reasons for the lower coverage and diversity explored by the GAN architectures that use CNN critics.", "title": "" }, { "docid": "1a6e9229f6bc8f6dc0b9a027e1d26607", "text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.", "title": "" }, { "docid": "cded1e50c211f6912efa7f9a63ffd5a7", "text": "With the proliferation of e-commerce, a large part of online shopping is attributed to impulse buying. Hence, there is a particular necessity to understand impulse buying in the online context. Impulse shoppers incline to feel unable to control their tendencies and behaviors from various stimuli. Specifically, online consumers are both the impulse shoppers and the system users of websites in the purchase process. Impulse shoppers concern individual traits and system users cover the attributes of online stores. 
Online impulse buying therefore entails two key drivers, technology use and trust belief, and the mediator of flow experience. Grounding on flow experience, technology-use features, and trust belief, this study", "title": "" }, { "docid": "7cebca85b555c6312f14cfa90fb1b50b", "text": "This paper describes a new evolutionary algorithm that is especially well suited to AI-Assisted Game Design. The approach adopted in this paper is to use observations of AI agents playing the game to estimate the game's quality. Some of the best agents for this purpose are General Video Game AI agents, since they can be deployed directly on a new game without game-specific tuning; these agents tend to be based on stochastic algorithms which give robust but noisy results and tend to be expensive to run. This motivates the main contribution of the paper: the development of the novel N-Tuple Bandit Evolutionary Algorithm, where a model is used to estimate the fitness of unsampled points and a bandit approach is used to balance exploration and exploitation of the search space. Initial results on optimising a Space Battle game variant suggest that the algorithm offers far more robust results than the Random Mutation Hill Climber and a Biased Mutation variant, which are themselves known to offer competitive performance across a range of problems. Subjective observations are also given by human players on the nature of the evolved games, which indicate a preference towards games generated by the N-Tuple algorithm.", "title": "" }, { "docid": "4cc4a6644e367afacee006fdb9f5e68a", "text": "A lifetime optimization methodology for planning the inspection and repair of structures that deteriorate over time is introduced and illustrated through numerical examples. The optimization is based on minimizing the expected total life-cycle cost while maintaining an allowable lifetime reliability for the structure. This method incorporates: (a) the quality of inspection techniques with different detection capabilities; (b) all repair possibilities based on an event tree; (c) the effects of aging, deterioration, and subsequent repair on structural reliability; and (d) the time value of money. The overall cost to be minimized includes the initial cost and the costs of preventive maintenance, inspection, repair, and failure. The methodology is illustrated using the reinforced concrete T-girders from a highway bridge. An optimum inspection/repair strategy is developed for these girders that are deteriorating due to corrosion in an aggressive environment. The effects of critical parameters such as rate of corrosion, quality of the inspection technique, and the expected cost of structural failure are all investigated, along with the effects of both uniform and nonuniform inspection time intervals. Ultimately, the reliability-based lifetime approach to developing an optimum inspection/repair strategy demonstrates the potential for cost savings and improved efficiency. INTRODUCTION The management of the nation's infrastructure is a vitally important function of government. The inspection and repair of the transportation network is needed for uninterrupted commerce and a functioning economy. With about 600,000 highway bridges in the national inventory, the maintenance of these structures alone represents a commitment of billions of dollars annually. In fact, the nation spends at least $5,000,000,000 per year for highway bridge design, construction, replacement, and rehabilitation (Status 1993). 
Given this huge investment along with an increasing scarcity of resources, it is essential that the funds be used as efficiently as possible. Highway bridges deteriorate over time and need maintenance/inspection programs that detect damage, deterioration, loss of effective strength in members, missing fasteners, fractures, and cracks. Bridge serviceability is highly dependent on the frequency and quality of these maintenance programs. Because the welfare of many people depends on the health of the highway system, it is important that these bridges be maintained and inspected routinely. An efficient bridge maintenance program requires careful planning based on potential modes of failure of the structural elements, the history of major structural repairs done to the bridge, and, of course, the frequency and intensity of the applied loads. Effective maintenance/inspection can extend the life expectancy of a system while reducing the possibility of costly failures in the future. In any bridge, there are many defects that may appear during a projected service period, such as potholes in the deck, scour on the piers, or the deterioration of joints or bearings. Corrosion of steel reinforcement, initiated by high chloride concentrations in the concrete, is a serious cause of degradation in concrete structures (Ting 1989). The corrosion damage is revealed by the initiation and propagation of cracks, which can be detected and repaired by scheduled maintenance and inspection procedures. As a result, the reliability of corrosion-critical structures depends not only on the structural design, but also on the inspection and repair procedures. This paper proposes a method to optimize the lifetime inspection/repair strategy of corrosion-critical concrete structures based on the reliability of the structure and cost-effectiveness. The method is applicable for any type of damage whose evolution can be modeled over time. The reliability-based analysis of structures, with or without maintenance/inspection procedures, is attracting the increased attention of researchers (Thoft-Christensen and Sørensen 1987; Mori and Ellingwood 1994a). The optimal lifetime inspection/repair strategy is obtained by minimizing the expected total life-cycle cost while satisfying the constraints on the allowable level of structural lifetime reliability in service. The expected total life-cycle cost includes the initial cost and the costs of preventive maintenance, inspection, repair, and failure. MAINTENANCE/INSPECTION For many bridges, both preventive and repair maintenance are typically performed. Preventive or routine maintenance includes replacing small parts, patching concrete, repairing cracks, changing lubricants, and cleaning and painting exposed parts. The structure is kept in working condition by delaying and mitigating the aging effects of wear, fatigue, and related phenomena. In contrast, repair maintenance might include replacing a bearing, resurfacing a deck, or modifying a girder. Repair maintenance tends to be less frequent, requires more effort, is usually more costly, and results in a measurable increase in reliability. 
A sample maintenance strategy is shown in Fig. 1, where T1, T2, T3, and T4 represent the times of repair maintenance, and effort is a generic quantity that reflects cost, amount of work performed, and benefit derived from the maintenance. While guidance for routine maintenance exists, many repair maintenance strategies are based on experience and local practice rather than on sound theoretical investigations. Mainte-", "title": "" }, { "docid": "dbec1cf4a0904af336e0c75c211f49b7", "text": "BACKGROUND\nBoron neutron capture therapy (BNCT) is based on the nuclear reaction that occurs when boron-10 is irradiated with low-energy thermal neutrons to yield high linear energy transfer alpha particles and recoiling lithium-7 nuclei. Clinical interest in BNCT has focused primarily on the treatment of high-grade gliomas and either cutaneous primaries or cerebral metastases of melanoma, most recently, head and neck and liver cancer. Neutron sources for BNCT currently are limited to nuclear reactors and these are available in the United States, Japan, several European countries, and Argentina. Accelerators also can be used to produce epithermal neutrons and these are being developed in several countries, but none are currently being used for BNCT.\n\n\nBORON DELIVERY AGENTS\nTwo boron drugs have been used clinically, sodium borocaptate (Na(2)B(12)H(11)SH) and a dihydroxyboryl derivative of phenylalanine called boronophenylalanine. The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting to achieve boron concentrations ( approximately 20 microg/g tumor) sufficient to deliver therapeutic doses of radiation to the tumor with minimal normal tissue toxicity. Over the past 20 years, other classes of boron-containing compounds have been designed and synthesized that include boron-containing amino acids, biochemical precursors of nucleic acids, DNA-binding molecules, and porphyrin derivatives. High molecular weight delivery agents include monoclonal antibodies and their fragments, which can recognize a tumor-associated epitope, such as epidermal growth factor, and liposomes. However, it is unlikely that any single agent will target all or even most of the tumor cells, and most likely, combinations of agents will be required and their delivery will have to be optimized.\n\n\nCLINICAL TRIALS\nCurrent or recently completed clinical trials have been carried out in Japan, Europe, and the United States. The vast majority of patients have had high-grade gliomas. Treatment has consisted first of \"debulking\" surgery to remove as much of the tumor as possible, followed by BNCT at varying times after surgery. Sodium borocaptate and boronophenylalanine administered i.v. have been used as the boron delivery agents. The best survival data from these studies are at least comparable with those obtained by current standard therapy for glioblastoma multiforme, and the safety of the procedure has been established.\n\n\nCONCLUSIONS\nCritical issues that must be addressed include the need for more selective and effective boron delivery agents, the development of methods to provide semiquantitative estimates of tumor boron content before treatment, improvements in clinical implementation of BNCT, and a need for randomized clinical trials with an unequivocal demonstration of therapeutic efficacy. 
If these issues are adequately addressed, then BNCT could move forward as a treatment modality.", "title": "" }, { "docid": "5a397012744d958bb1a69b435c73e666", "text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.", "title": "" }, { "docid": "6a2544c5c52b08e70e0d0e2696f41017", "text": "This first textbook on formal concept analysis gives a systematic presentation of the mathematical foundations and their relations to applications in computer science, especially in Before we only the expression is required for basic law. In case fcbo can understand frege's system so. Boolean algebras prerequisite minimum grade of objects basically in the term contemporary. The following facts many are biconditionals such from the name of can solely. But one distinguishes indefinite and c2 then its underlying theory of the comprehension principle. The membership sign here is in those already generated for functions view. Comprehension principle of the point if and attributes let further. A set of height two sentences, are analyzed from the connections between full reconstruction. Frege later in the discussion so, that direction of logic laws governing cardinal number. This theorem by the open only, over concepts. Where is a relation names or, using this means of his attitude among. With our example we define xis an ancestor of mathematics.", "title": "" }, { "docid": "60b21a7b9f0f52f48ae2830db600fa24", "text": "The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms. This paper provides a preliminary empirical evaluation of several multiarmed bandit algorithms. It also describes and analyzes a new algorithm, Poker (Price Of Knowledge and Estimated Reward) whose performance compares favorably to that of other existing algorithms in several experiments. One remarkable outcome of our experiments is that the most naive approach, the -greedy strategy, proves to be often hard to beat.", "title": "" }, { "docid": "1de30db68b41c0e29320397ca464bb75", "text": "In software development, bug reports provide crucial information to developers. However, these reports widely differ in their quality. We conducted a survey among developers and users of APACHE, ECLIPSE, and MOZILLA to find out what makes a good bug report.\n The analysis of the 466 responses revealed an information mismatch between what developers need and what users supply. Most developers consider steps to reproduce, stack traces, and test cases as helpful, which are at the same time most difficult to provide for users. 
Such insight is helpful to design new bug tracking tools that guide users at collecting and providing more helpful information.\n Our CUEZILLA prototype is such a tool and measures the quality of new bug reports; it also recommends which elements should be added to improve the quality. We trained CUEZILLA on a sample of 289 bug reports, rated by developers as part of the survey. In our experiments, CUEZILLA was able to predict the quality of 31--48% of bug reports accurately.", "title": "" } ]
scidocsrr
4d9dd5696657fb17c9fca974a8cce5d5
A review of induction motors signature analysis as a medium for faults detection
[ { "docid": "afa70058c6df7b85040ce40be752bb89", "text": "The authors attempt to identify the various causes of stator and rotor failures in three-phase squirrel cage induction motors. A specific methodology is proposed to facilitate an accurate analysis of these failures. It is noted that, due to the destructive nature of most failures, it is not easy, and is sometimes impossible, to determine the primary cause of failure. By a process of elimination, one can usually be assured of properly identifying the most likely cause of the failure. It is pointed out that the key point in going through this process of elimination is to use the basic steps of analyzing the failure class and pattern, noting the general motor appearance, identifying the operating condition at the time of failure, and gaining knowledge of the past history of the motor and application.<<ETX>>", "title": "" } ]
[ { "docid": "4ee078123815eff49cc5d43550021261", "text": "Generalized anxiety and major depression have become increasingly common in the United States, affecting 18.6 percent of the adult population. Mood disorders can be debilitating, and are often correlated with poor general health, life dissatisfaction, and the need for disability benefits due to inability to work. Recent evidence suggests that some mood disorders have a circadian component, and disruptions in circadian rhythms may even trigger the development of these disorders. However, the molecular mechanisms of this interaction are not well understood. Polymorphisms in a circadian clock-related gene, PER3, are associated with behavioral phenotypes (extreme diurnal preference in arousal and activity) and sleep/mood disorders, including seasonal affective disorder (SAD). Here we show that two PER3 mutations, a variable number tandem repeat (VNTR) allele and a single-nucleotide polymorphism (SNP), are associated with diurnal preference and higher Trait-Anxiety scores, supporting a role for PER3 in mood modulation. In addition, we explore a potential mechanism for how PER3 influences mood by utilizing a comprehensive circadian clock model that accurately predicts the changes in circadian period evident in knock-out phenotypes and individuals with PER3-related clock disorders.", "title": "" }, { "docid": "8f3b28c1b271652136ac43f420e92dc3", "text": "In this paper, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although convolutional neural networks (CNNs) have made substantial improvement on human attention prediction, it is still needed to improve the CNN-based attention models by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various reception fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches of learning multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark data sets demonstrate our method yields the state-of-the-art performance with competitive inference time.11Our source code is available at https://github.com/wenguanwang/deepattention.", "title": "" }, { "docid": "15080420974928809fdf774acfdc200a", "text": "Let F be a distribution in D′ and let f be a locally summable function. The composition F f x of F and f is said to exist and be equal to the distribution h x if the limit of the sequence {Fn f x } is equal to h x , where Fn x F x ∗ δn x for n 1, 2, . . . and {δn x } is a certain regular sequence converging to the Dirac delta function. It is proved that the neutrix composition δ rs−1 tanhx 1/r exists and δ rs−1 tanhx 1/r ∑s−1 k 0 ∑Kk i 0 −1 cs−2i−1,k rs !/2sk! δ k x for r, s 1, 2, . . 
., where Kk is the integer part of s − k − 1 /2 and the constants cj,k are defined by the", "title": "" }, { "docid": "2d0cc17115692f1e72114c636ba74811", "text": "A new inline coupling topology for narrowband helical resonator filters is proposed that allows to introduce selectively located transmission zeros (TZs) in the stopband. We show that a pair of helical resonators arranged in an interdigital configuration can realize a large range of in-band coupling coefficient values and also selectively position a TZ in the stopband. The proposed technique dispenses the need for auxiliary elements, so that the size, complexity, power handling and insertion loss of the filter are not compromised. A second order prototype filter with dimensions of the order of 0.05λ, power handling capability up to 90 W, measured insertion loss of 0.18 dB and improved selectivity is presented.", "title": "" }, { "docid": "3132ed8b0f2e257c3e9e8b0a716cd72c", "text": "Auditory evoked potentials were recorded from the vertex of subjects who listened selectively to a series of tone pips in one ear and ignored concurrent tone pips in the other ear. The negative component of the evoked potential peaking at 80 to 110 milliseconds was substantially larger for the attended tones. This negative component indexed a stimulus set mode of selective attention toward the tone pips in one ear. A late positive component peaking at 250 to 400 milliseconds reflected the response set established to recognize infrequent, higher pitched tone pips in the attended series.", "title": "" }, { "docid": "95356b2f25acbceeb49f1c3a7cfd94cc", "text": "BACKGROUND\nChronic gastritis is one of the most common findings at upper endoscopy in the general population, and chronic atrophic gastritis is epidemiologically associated with the occurrence of gastric cancer. However, the current status of diagnosis and treatment of chronic gastritis in China is unclear.\n\n\nMETHODS\nA multi-center national study was performed; all patients who underwent diagnostic upper endoscopy for evaluation of gastrointestinal symptoms from 33 centers were enrolled. Data including sex, age, symptoms and endoscopic findings were prospectively recorded.\n\n\nRESULTS\nTotally 8892 patients were included. At endoscopy, 4389, 3760 and 1573 patients were diagnosed to have superficial gastritis, erosive gastritis, and atrophic gastritis, respectively. After pathologic examination, it is found that atrophic gastritis, intestinal metaplasia and dysplasia were prevalent, which accounted for 25.8%, 23.6% and 7.3% of this patient population. Endoscopic features were useful for predicting pathologic atrophy (PLR = 4.78), but it was not useful for predicting erosive gastritis. Mucosal-protective agents and PPI were most commonly used medications for chronic gastritis.\n\n\nCONCLUSIONS\nThe present study suggests non-atrophic gastritis is the most common endoscopic finding in Chinese patients with upper GI symptoms. Precancerous lesions, including atrophy, intestinal metaplasia and dysplasia are prevalent in Chinese patients with chronic gastritis, and endoscopic features are useful for predicting pathologic atrophy.", "title": "" }, { "docid": "00b98536f0ecd554442a67fb31f77f4c", "text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. 
Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classi cation: J3, C18.", "title": "" }, { "docid": "ffa772cd54dbd80f3ef8048febe72f12", "text": "We present a system for the detection of small and potentially obscured obstacles in vegetated terrain. The key novelty of this system is the coupling of a volumetric occupancy map with a 3D Convolutional Neural Network (CNN), which to the best of our knowledge has not been previously done. This architecture allows us to train an extremely efficient and highly accurate system for detection tasks from raw occupancy data. We apply this method to the problem of detecting safe landing zones for autonomous helicopters from LiDAR point clouds. Current methods for this problem rely on heuristic rules and use simple geometric features. These heuristics break down in the presence of low vegetation, as they do not distinguish between vegetation that may be landed on and solid objects that should be avoided. We evaluate the system with a combination of real and synthetic range data. We show our system outperforms various benchmarks, including a system integrating various hand-crafted point cloud features from the literature.", "title": "" }, { "docid": "95119801d8b4fece726cb62dae2ff192", "text": "Constructing 3D structures from serial section data is a long standing problem in microscopy. The structure of a fiber reinforced composite material can be reconstructed using a tracking-by-detection model. Tracking-by-detection algorithms rely heavily on detection accuracy, especially the recall performance. The state-of-the-art fiber detection algorithms perform well under ideal conditions, but are not accurate where there are local degradations of image quality, due to contaminants on the material surface and/or defocus blur. Convolutional Neural Networks (CNN) could be used for this problem, but would require a large number of manual annotated fibers, which are not available. We propose an unsupervised learning method to accurately detect fibers on the large scale, that is robust against local degradations of image quality. The proposed method does not require manual annotations, but uses fiber shape/size priors and spatio-temporal consistency in tracking to simulate the supervision in the training of the CNN. Experiments show significant improvements over state-of-the-art fiber detection algorithms together with advanced tracking performance.", "title": "" }, { "docid": "ad58f12cc1cb9b77b57cfc9bf52859db", "text": "During the past few years, cloud computing has become a key IT buzzword. Although the definition of cloud computing is still “cloudy”, the trade press and bloggers label many vendors as cloud computing vendors, and report on their services and issues. Cloud computing is in its infancy in terms of market adoption. However, it is a key IT megatrend that will take root. This article reviews its definition and status, adoption issues, and provides a glimpse of its future and discusses technical issues that are expected to be addressed.", "title": "" }, { "docid": "366b3d17f49b7460aef5b2255c8dacdd", "text": "We give a theoretical and experimental analysis of the generalization error of cross validation using two natural measures of the problem under consideration. 
The approximation rate measures the accuracy to which the target function can be ideally approximated as a function of the number of parameters, and thus captures the complexity of the target function with respect to the hypothesis model. The estimation rate measures the deviation between the training and generalization errors as a function of the number of parameters, and thus captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of the simplest form of cross validation. The bound clearly shows the dangers of making the fraction of data saved for testingtoo large or too small. By optimizing the bound with respect to , we then argue that the following qualitative properties of cross-validation behavior should be quite robust to significant changes in the underlying model selection problem: When the target function complexity is small compared to the sample size, the performance of cross validation is relatively insensitive to the choice of . The importance of choosing optimally increases, and the optimal value for decreases, as the target function becomes more complex relative to the sample size. There is nevertheless a single fixed value for that works nearly optimally for a wide range of target function complexity.", "title": "" }, { "docid": "04384b62c17f9ff323db4d51bea86fe9", "text": "Imbalanced data widely exist in many high-impact applications. An example is in air traffic control, where among all three types of accident causes, historical accident reports with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting data set of accident reports is highly imbalanced. On the other hand, this data set can be naturally modeled as a network, with each node representing an accident report, and each edge indicating the similarity of a pair of accident reports. Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representations for imbalanced networks. To bridge this gap, in this paper, we first propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. It is significantly different from the existing Vertex Reinforced Random Walk by discouraging the random particle to return to the nodes that have already been visited. This design is particularly suitable for imbalanced networks as the random particle is more likely to visit the nodes from the same class, which is a desired property for learning node representations. Furthermore, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, where context sampling uses VDRW and the limited label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs from different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms stateof-the-art algorithms for learning network representations from imbalanced data.", "title": "" }, { "docid": "3afa34f0420e422cfe1b3d61abad5e7f", "text": "One of the many challenges in designing autonomy for operation in uncertain and dynamic environments is the planning of collision-free paths. 
Roadmap-based motion planning is a popular technique for identifying collision-free paths, since it approximates the often infeasible space of all possible motions with a networked structure of valid configurations. We use stochastic reachable sets to identify regions of low collision probability, and to create roadmaps which incorporate likelihood of collision. We complete a small number of stochastic reachability calculations with individual obstacles a priori. This information is then associated with the weight, or preference for traversal, given to a transition in the roadmap structure. Our method is novel, and scales well with the number of obstacles, maintaining a relatively high probability of reaching the goal in a finite time horizon without collision, as compared to other methods. We demonstrate our method on systems with up to 50 dynamic obstacles.", "title": "" }, { "docid": "a1554514d8288ba569608659fa93ab03", "text": "Smart antennas have received increasing interest for mitigating interference in the multiple-input-multiple-output (MIMO) wireless local area network (WLAN). In this paper, a dual-band dual-polarized compact bowtie dipole antenna array is proposed to support anti-interference MIMO WLAN applications. In the antenna array, there are 12 antennas, six for horizontal polarization and six for vertical polarization. In order to achieve dual linear polarizations and beam switching, six horizontal antennas are placed in a sequential, rotating arrangement on a horizontal substrate panel with an equal inclination angle of 60 ° to form a symmetrical structure, while the other six antennas for vertical polarization are inserted through slots made on the horizontal substrate panel. Furthermore, six pairs of meandered slits are introduced to reduce the mutual coupling between horizontal antennas in the lower band. A prototype of the array with a dimension of 150 × 150 × 60 mm3 is manufactured and exhibits the characteristics of high isolation, good front-to-back ratio, and average gains of 4.5 and 5 dBi over the 2.4- and 5-GHz band, respectively. The MIMO performance of the array is analyzed and evaluated by mutual coupling, the total active reflection coefficient (TARC) and the envelope correlation coefficient. The anti-interference capability of the array is also investigated by the experiment.", "title": "" }, { "docid": "183530436c47abec9897152dc4a0aad9", "text": "Previous research has proposed that tests enhance retention more than do restudy opportunities because they promote the effectiveness of mediating information--that is, a word or concept that links a cue to a target (Pyc & Rawson, 2010). Although testing has been shown to promote retention of mediating information that participants were asked to generate, it is unknown what type of mediators are spontaneously activated during testing and how these contribute to later retention. In the current study, participants learned cue-target pairs through testing (e.g., Mother: _____) or restudying (e.g., Mother: Child) and were later tested on these items in addition to a never-before-presented item that was strongly associated with the cue (e.g., Father)--that is, the semantic mediator. Compared with participants who learned the items through restudying, those who learned the items through testing exhibited higher false alarm rates to semantic mediators on a final recognition test (Experiment 1) and were also more likely to recall the correct target from the semantic mediator on a final cued recall test (Experiment 2). 
These results support the mediator effectiveness hypothesis and demonstrate that semantically related information may be 1 type of natural mediator that is activated during testing.", "title": "" }, { "docid": "38f85257dabb2b9a876a6ac30d5cd758", "text": "The issue of the depletion of oil reserves in the world, and the problem of air pollution produced by motor vehicles, motivate many researchers to seek alternative energy sources to propel the vehicle. One promising way is to replace combustion motor with an electric motor, which is known as an electric vehicle. First stages of this research is to model the flow of power in the electric vehicle energy system to obtain its characteristics. Power flow efficiency in", "title": "" }, { "docid": "7b283d4b267c84a8d4b7e74e0957ccc1", "text": "The multimodal properties of the human somatosensory system continue to be unravelled. There is mounting evidence that one of these submodalities-touch-has another dimension, providing not only its well-recognized discriminative input to the brain, but also an affective input. It has long been recognized that touch plays an important role in many forms of social communication and a number of theories have been proposed to explain observations and beliefs about the \"power of touch.\" Here, we propose that a class of low-threshold mechanosensitive C fibers that innervate the hairy skin represent the neurobiological substrate for the affective and rewarding properties of touch.", "title": "" }, { "docid": "a212ba02d2546ee33e42fe26f4b05295", "text": "The requirement to operate aircraft in GPS-denied environments can be met by using visual odometry. Aiming at a full-scale aircraft equipped with a high-accuracy inertial navigation system (INS), the proposed method combines vision and the INS for odometry estimation. With such an INS, the aircraft orientation is accurate with low drift, but it contains high-frequency noise that can affect the vehicle motion estimation, causing position estimation to drift. Our method takes the INS orientation as input and estimates translation. During motion estimation, the method virtually rotates the camera by reparametrizing features with their depth direction perpendicular to the ground. This partially eliminates error accumulation in motion estimation caused by the INS high-frequency noise, resulting in a slow drift. We experiment on two hardware configurations in the acquisition of depth for the visual features: 1) the height of the aircraft above the ground is measured by an altimeter assuming that the imaged ground is a local planar patch, and 2) the depth map of the ground is registered with a two-dimensional laser in a push-broom configuration. The method is tested with data collected from a full-scale helicopter. The accumulative flying distance for the overall tests is approximately 78 km. We observe slightly better accuracy with the push-broom laser than the altimeter. C © 2015 Wiley Periodicals, Inc.", "title": "" } ]
scidocsrr
3fa6600520434c3ae2af1d3a238ec5c4
Data Ingestion for the Connected World
[ { "docid": "2c6b5867299f26194fa8b1d5ac0b0997", "text": "Large-scale internet services aim to remain highly available and responsive in the presence of unexpected failures. Providing this service often requires monitoring and analyzing tens of millions of measurements per second across a large number of systems, and one particularly effective solution is to store and query such measurements in a time series database (TSDB). A key challenge in the design of TSDBs is how to strike the right balance between efficiency, scalability, and reliability. In this paper we introduce Gorilla, Facebook’s inmemory TSDB. Our insight is that users of monitoring systems do not place much emphasis on individual data points but rather on aggregate analysis, and recent data points are of much higher value than older points to quickly detect and diagnose the root cause of an ongoing problem. Gorilla optimizes for remaining highly available for writes and reads, even in the face of failures, at the expense of possibly dropping small amounts of data on the write path. To improve query efficiency, we aggressively leverage compression techniques such as delta-of-delta timestamps and XOR’d floating point values to reduce Gorilla’s storage footprint by 10x. This allows us to store Gorilla’s data in memory, reducing query latency by 73x and improving query throughput by 14x when compared to a traditional database (HBase)backed time series data. This performance improvement has unlocked new monitoring and debugging tools, such as time series correlation search and more dense visualization tools. Gorilla also gracefully handles failures from a single-node to entire regions with little to no operational overhead.", "title": "" }, { "docid": "c5e37e68f7a7ce4b547b10a1888cf36f", "text": "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.", "title": "" }, { "docid": "b67acf80642aa2ba8ba01c362303857c", "text": "Storm has long served as the main platform for real-time analytics at Twitter. However, as the scale of data being processed in real-time at Twitter has increased, along with an increase in the diversity and the number of use cases, many limitations of Storm have become apparent. We need a system that scales better, has better debug-ability, has better performance, and is easier to manage -- all while working in a shared cluster infrastructure. We considered various alternatives to meet these needs, and in the end concluded that we needed to build a new real-time stream data processing system. This paper presents the design and implementation of this new system, called Heron. Heron is now the de facto stream data processing engine inside Twitter, and in this paper we also share our experiences from running Heron in production. In this paper, we also provide empirical evidence demonstrating the efficiency and scalability of Heron.", "title": "" } ]
[ { "docid": "ad7852de8e1f80c68417c459d8a12e15", "text": "Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients – a key element in generative adversarial network training – using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.", "title": "" }, { "docid": "c668dd96bbb4247ad73b178a7ba1f921", "text": "Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and supportvector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotionrelated natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "21c6b467cb10ac6db8693663470f8822", "text": "Spine movements play an important role in quadrupedal locomotion, yet their potential benefits in locomotion of quadruped robots have not been systematically explored. In this work, we investigate the role of spinal joint actuation and compliance on the bounding performance of a simulated compliant quadruped robot. We designed and conducted extensive simulation experiments, to compare the benefits of different spine designs, and in particular, we compared the bounding performance when (i) using actuated versus passive spinal joint, (ii) changing the stiffness of the spinal joint and (iii) altering joint actuation profiles. We used a detailed rigid body dynamics modeling to capture the main dynamical features of the robot. We applied a set of analytic tools to evaluate the bounding gait characteristics including periodicity, stability, and cost of transport. A stochastic optimization method called particle swarm optimization was implemented to perform a global search over the parameter space, and extract a pool of diverse gait solutions. Our results show improvements in bounding speed for decreasing spine stiffness, both in the passive and the actuated case. 
The results also suggests that for the passive spine configuration at low stiffness values, periodic solutions are hard to realize. Overall, passive spine solutions were more energy Electronic supplementary material The online version of this article (doi:10.1007/s10514-015-9540-2) contains supplementary material, which is available to authorized users. B Soha Pouya soha.pouya@epfl.ch 1 Biorobotics Laboratory, Institute of Bioengineering, Ecole Polytechnique Federale de Lausanne (EPFL), 1015 Lausanne, Switzerland efficient and self-stable than actuated ones, but they basically exist in limited regions of parameter space. Applying more complex joint control profiles reduced the dependency of the robot’s speed to its chosen spine stiffness. In average, active spine control decreased energy efficiency and self-stability behavior, in comparison to a passive compliant spine setup.", "title": "" }, { "docid": "9eafc698b64e6042d8e7a23c9b2cce0c", "text": "Though convolutional neural networks have achieved stateof-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding humanimperceptible perturbations to the original images. Adversarial examples can thus be used as an useful tool to evaluate and select the most robust models in safety-critical applications. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. To further improve the transferability, we (1) integrate the recently proposed momentum method into the attack process; and (2) attack an ensemble of networks simultaneously. By evaluating our method against top defense submissions and official baselines from NIPS 2017 adversarial competition, this enhanced attack reaches an average success rate of 73.0%, which outperforms the top 1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in future. The code is public available at https://github.com/cihangxie/DI-2-FGSM.", "title": "" }, { "docid": "13a9329bdd46ba243003090bf219a20a", "text": "Visual art represents a powerful resource for mental and physical well-being. However, little is known about the underlying effects at a neural level. A critical question is whether visual art production and cognitive art evaluation may have different effects on the functional interplay of the brain's default mode network (DMN). We used fMRI to investigate the DMN of a non-clinical sample of 28 post-retirement adults (63.71 years ±3.52 SD) before (T0) and after (T1) weekly participation in two different 10-week-long art interventions. Participants were randomly assigned to groups stratified by gender and age. In the visual art production group 14 participants actively produced art in an art class. 
In the cognitive art evaluation group 14 participants cognitively evaluated artwork at a museum. The DMN of both groups was identified by using a seed voxel correlation analysis (SCA) in the posterior cingulated cortex (PCC/preCUN). An analysis of covariance (ANCOVA) was employed to relate fMRI data to psychological resilience which was measured with the brief German counterpart of the Resilience Scale (RS-11). We observed that the visual art production group showed greater spatial improvement in functional connectivity of PCC/preCUN to the frontal and parietal cortices from T0 to T1 than the cognitive art evaluation group. Moreover, the functional connectivity in the visual art production group was related to psychological resilience (i.e., stress resistance) at T1. Our findings are the first to demonstrate the neural effects of visual art production on psychological resilience in adulthood.", "title": "" }, { "docid": "070a1de608a35cddb69b84d5f081e94d", "text": "Identifying potentially vulnerable locations in a code base is critical as a pre-step for effective vulnerability assessment; i.e., it can greatly help security experts put their time and effort to where it is needed most. Metric-based and pattern-based methods have been presented for identifying vulnerable code. The former relies on machine learning and cannot work well due to the severe imbalance between non-vulnerable and vulnerable code or lack of features to characterize vulnerabilities. The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities. In this paper, we propose and implement a generic, lightweight and extensible framework, LEOPARD, to identify potentially vulnerable functions through program metrics. LEOPARD requires no prior knowledge about known vulnerabilities. It has two steps by combining two sets of systematically derived metrics. First, it uses complexity metrics to group the functions in a target application into a set of bins. Then, it uses vulnerability metrics to rank the functions in each bin and identifies the top ones as potentially vulnerable. Our experimental results on 11 real-world projects have demonstrated that, LEOPARD can cover 74.0% of vulnerable functions by identifying 20% of functions as vulnerable and outperform machine learning-based and static analysis-based techniques. We further propose three applications of LEOPARD for manual code review and fuzzing, through which we discovered 22 new bugs in real applications like PHP, radare2 and FFmpeg, and eight of them are new vulnerabilities.", "title": "" }, { "docid": "1cceffd9ef0281f89fb6b7efd5d03371", "text": "We report compact and wideband 90° hybrid with a one-way tapered 4×4 MMI waveguide. The fabricated device with a device length of 198 µm exhibited a phase deviation of &#60;±5.4° over a 70-nm-wide spectral range.", "title": "" }, { "docid": "ec189ac55b64402d843721de4fc1f15c", "text": "DroidMiner is a new malicious Android app detection system that uses static analysis to automatically mine malicious program logic from known Android malware. DroidMiner uses a behavioral graph to abstract malware program logic into a sequence of threat modalities, and then applies machine-learning techniques to identify and label elements of the graph that match harvested threat modalities. 
Once trained on a mobile malware corpus, DroidMiner can automatically scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) precisely characterize behaviors found within the analyzed app. While DroidMiner is not the first to attempt automated classification of Android applications based on Framework API calls, it is distinguished by its development of modalities that are resistant to noise insertions and its use of associative rule mining that enables automated association of malicious behaviors with modalities. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, DroidMiner achieves a 95.3% detection rate, with a 0.4% false positive rate. We further evaluate DroidMiner’s ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.", "title": "" }, { "docid": "dbd11235f7b6b515f672b06bb10ebc3d", "text": "Until recently job seeking has been a tricky, tedious and time consuming process, because people looking for a new position had to collect information from many different sources. Job recommendation systems have been proposed in order to automate and simplify this task, also increasing its effectiveness. However, current approaches rely on scarce manually collected data that often do not completely reveal people skills. Our work aims to find out relationships between jobs and people skills making use of data from LinkedIn users’ public profiles. Semantic associations arise by applying Latent Semantic Analysis (LSA). We use the mined semantics to obtain a hierarchical clustering of job positions and to build a job recommendation system. The outcome proves the effectiveness of our method in recommending job positions. Anyway, we argue that our approach is definitely general, because the extracted semantics could be worthy not only for job recommendation systems but also for recruiting systems. Furthermore, we point out that both the hierarchical clustering and the recommendation system do not require parameters to be tuned.", "title": "" }, { "docid": "53b609fa483c698e1e2934c013da0d62", "text": "As patients live longer after cancer diagnosis and treatment, attention to symptoms and quality of life (QoL) are of increasing importance both during treatment and throughout survivorship. Two complications of multi-modal cancer treatment that can profoundly affect both men and women are sexual dysfunction and infertility. Survivors at highest risk for treatment-related sexual dysfunction are those with tumors that involve the sexual or pelvic organs and those whose treatment affects the hormonal systems mediating sexual function. Sexual dysfunction may not abate without appropriate intervention. Therefore, early identification and treatment strategies are essential. Likewise, multiple factors contribute to the risk of infertility from cancer treatment and many cancer patients of reproductive age would prefer to maintain their fertility, if possible. Fortunately, advances in reproductive technology have created options for young newly diagnosed patients to preserve their ability to have a biologic child. 
This paper will focus on the sexual and reproductive problems encountered by cancer survivors and discuss some treatment options.", "title": "" }, { "docid": "b34216c34f32336db67f76f1c94c255b", "text": "Exploration is still one of the crucial problems in reinforcement learning, especially for agents acting in safety-critical situations. We propose a new directed exploration method, based on a notion of state controlability. Intuitively, if an agent wants to stay safe, it should seek out states where the effects of its actions are easier to predict; we call such states more controllable. Our main contribution is a new notion of controlability, computed directly from temporaldifference errors. Unlike other existing approaches of this type, our method scales linearly with the number of state features, and is directly applicable to function approximation. Our method converges to correct values in the policy evaluation setting. We also demonstrate significantly faster learning when this exploration strategy is used in large control problems.", "title": "" }, { "docid": "37a108b2d30a08cb78321f96c1e9eca4", "text": "The TRAM flap, DIEP flap, and gluteal free flaps are routinely used for breast reconstruction. However, these have seldom been described for reconstruction of buttock deformities. We present three cases of free flaps used to restore significant buttock contour deformities. They introduce vascularised bulky tissue and provide adequate cushioning for future sitting, as well as correction of the aesthetic defect.", "title": "" }, { "docid": "bb8b6d2424ef7709aa1b89bc5d119686", "text": "We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors. In a held-out test set, our Long Short-Term Memory model gives a mean absolute percentage error of 24.2%, outperforming linear Ridge/Lasso and autoregressive GARCH benchmarks by at least 31%. This evaluation is based on an optimal observation and normalization scheme which maximizes the mutual information between domestic trends and daily volatility in the training set. Our preliminary investigation shows strong promise for better predicting stock behavior via deep learning and neural network models.", "title": "" }, { "docid": "18d28769691fb87a6ebad5aae3eae078", "text": "The current head Injury Assessment Reference Values (IARVs) for the child dummies are based in part on scaling adult and animal data and on reconstructions of real world accident scenarios. Reconstruction of well-documented accident scenarios provides critical data in the evaluation of proposed IARV values, but relatively few accidents are sufficiently documented to allow for accurate reconstructions. This reconstruction of a well documented fatal-fall involving a 23-month old child supplies additional data for IARV assessment. The videotaped fatal-fall resulted in a frontal head impact onto a carpet-covered cement floor. The child suffered an acute right temporal parietal subdural hematoma without skull fracture. The fall dynamics were reconstructed in the laboratory and the head linear and angular accelerations were quantified using the CRABI-18 Anthropomorphic Test Device (ATD). Peak linear acceleration was 125 ± 7 g (range 114-139), HIC15 was 335 ± 115 (Range 257-616), peak angular velocity was 57± 16 (Range 26-74), and peak angular acceleration was 32 ± 12 krad/s 2 (Range 15-56). 
The results of the CRABI-18 fatal fall reconstruction were consistent with the linear and rotational tolerances reported in the literature. This study investigates the usefulness of the CRABI-18 anthropomorphic testing device in forensic investigations of child head injury and aids in the evaluation of proposed IARVs for head injury. INTRODUCTION Defining the mechanisms of injury and the associated tolerance of the pediatric head to trauma has been the focus of a great deal of research and effort. In contrast to the multiple cadaver experimental studies of adult head trauma published in the literature, there exist only a few experimental studies of infant head injury using human pediatric cadaveric tissue [1-6]. While these few studies have been very informative, due to limitations in sample size, experimental equipment, and study objectives, current estimates of the tolerance of the pediatric head are based on relatively few pediatric cadaver data points combined with the use of scaled adult and animal data. In effort to assess and refine these tolerance estimates, a number of researchers have performed detailed accident reconstructions of well-documented injury scenarios [7-11] . The reliability of the reconstruction data are predicated on the ability to accurately reconstruct the actual accident and quantify the result in a useful injury metric(s). These resulting injury metrics can then be related to the injuries of the child and this, when combined with other reliable reconstructions, can form an important component in evaluating pediatric injury mechanisms and tolerance. Due to limitations in case identification, data collection, and resources, relatively few reconstructions of pediatric accidents have been performed. In this study, we report the results of the reconstruction of an uncharacteristically well documented fall resulting in a fatal head injury of a 23 month old child. The case study was previously reported as case #5 by Plunkett [12]. BACKGROUND As reported by Plunkett (2001), A 23-month-old was playing on a plastic gym set in the garage at her home with her older brother. She had climbed the attached ladder to the top rail above the platform and was straddling the rail, with her feet 0.70 meters (28 inches) above the floor. She lost her balance and fell headfirst onto a 1-cm (3⁄8-inch) thick piece of plush carpet remnant covering the concrete floor. She struck the carpet first with her outstretched hands, then with the right front side of her forehead, followed by her right shoulder. Her grandmother had been watching the children play and videotaped the fall. She cried after the fall but was alert", "title": "" }, { "docid": "eb4284f45dfe66e4195de12d13f2decc", "text": "An entry of X is denoted by Xi1,...,id where each index iμ ∈ {1, . . . , nμ} refers to the μth mode of the tensor for μ = 1, . . . , d. For simplicity, we will assume that X has real entries, but it is of course possible to define complex tensors or, more generally, tensors over arbitrary fields. A wide variety of applications lead to problems where the data or the desired solution can be represented by a tensor. In this survey, we will focus on tensors that are induced by the discretization of a multivariate function; we refer to the survey [169] and to the books [175, 241] for the treatment of tensors containing observed data. The simplest way a given multivariate function f(x1, x2, . . . , xd) on a tensor product domain Ω = [0, 1] leads to a tensor is by sampling f on a tensor grid. 
In this case, each entry of the tensor contains the function value at the corresponding position in the grid. The function f itself may, for example, represent the solution to a high-dimensional partial differential equation (PDE). As the order d increases, the number of entries in X increases exponentially for constant n = n1 = · · · = nd. This so called curse of dimensionality prevents the explicit storage of the entries except for very small values of d. Even for n = 2, storing a tensor of order d = 50 would require 9 petabyte! It is therefore essential to approximate tensors of higher order in a compressed scheme, for example, a low-rank tensor decomposition. Various such decompositions have been developed, see Section 2. An important difference to tensors containing observed data, a tensor X induced by a function is usually not given directly but only as the solution of some algebraic equation, e.g., a linear system or eigenvalue problem. This requires the development of solvers for such equations working within the compressed storage scheme. Such algorithms are discussed in Section 3. The range of applications of low-rank tensor techniques is quickly expanding. For example, they have been used for addressing:", "title": "" }, { "docid": "760199e13c0c25022aed558923763b99", "text": "This paper presents a novel approach to automated sentence completion based on pointwise mutual information (PMI). Feature sets are created by fusing the various types of input provided to other classes of language models, ultimately allowing multiple sources of both local and distant information to be considered. Furthermore, it is shown that additional precision gains may be achieved by incorporating feature sets of higher-order n-grams. Experimental results demonstrate that the PMI model outperforms all prior models and establishes a new state-of-the-art result on the Microsoft Research Sentence Completion Challenge.", "title": "" }, { "docid": "1b625a1136bec100f459a39b9b980575", "text": "This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k non-zero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key motivation for developing the new algorithm. The proposed method is tested on applications such as sparse principal component analysis and the densest k-subgraph problem. Extensive experiments on several synthetic and real-world data sets demonstrate the competitive empirical performance of our method.", "title": "" }, { "docid": "8808c5f8ce726a9382facc63f9460e21", "text": "With the booming of deep learning in the recent decade, deep neural network has achieved state-of-art performances on many machine learning tasks and has been applied to more and more research fields. Stock market prediction is an attractive research topic since the successful prediction on the market’s future movement leads to significant profit. In this thesis, we investigate to combine the conventional stock analysis techniques with the popular deep learning together and study the impact of deep neural network on stock market prediction. Traditional short term stock market predictions are usually based on the analysis of historical market data, such as stock prices, moving averages or daily returns. 
Whereas financial news also contains useful information on public companies and the market. In this thesis we apply the popular word embedding methods and deep neural networks to leverage financial news to predict stock price movements in the market. Experimental results have shown that our proposed methods are simple but very effective, which can significantly improve the stock prediction accuracy on a standard financial database over the baseline system using only the historical price information.", "title": "" }, { "docid": "f590eac54deff0c65732cf9922db3b93", "text": "Lichen planus (LP) is a common chronic inflammatory condition that can affect skin and mucous membranes, including the oral mucosa. Because of the anatomic, physiologic and functional peculiarities of the oral cavity, the oral variant of LP (OLP) requires specific evaluations in terms of diagnosis and management. In this comprehensive review, we discuss the current developments in the understanding of the etiopathogenesis, clinical-pathologic presentation, and treatment of OLP, and provide follow-up recommendations informed by recent data on the malignant potential of the disease as well as health economics evaluations.", "title": "" }, { "docid": "fa373f09456dbdb1222aaa9df10fd117", "text": "A scalable extension to the H.264/AVC video coding standard has been developed within the joint video team (JVT), a joint organization of the ITU-T video coding group (VCEG) and the ISO/IEC moving picture experts group (MPEG). The extension allows multiple resolutions of an image sequence to be contained in a single bit stream. In this paper, we introduce the spatially scalable extension within the resulting scalable video coding standard. The high-level design is described and individual coding tools are explained. Additionally, encoder issues are identified. Finally, the performance of the design is reported.", "title": "" } ]
scidocsrr
0985d354295e5fa6b1a5d54ee0b4b5f9
Potential field methods and their inherent limitations for mobile robot navigation
[ { "docid": "38382c04e7dc46f5db7f2383dcae11fb", "text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.", "title": "" }, { "docid": "2c4ee0d42347cf75096caec62dda97f3", "text": "A new real-time obstacle avoidance method for mobile robots has been developed and implemented. This method, named the Vector Field Hisfog\" (VFH), permits the detection of unknown obstacles and avoids collisions while simultaneously steering the mobile robot toward the target. A VFH-controlled mobile robot maneuvers quickly and without stopping among densely cluttered obstacles. The VFH method uses a two-dimensional Cartesian Histognun Gfid as a world model. This world model is updated continuously and in real-time with range data sampled by the onboard ultrasonic range sensors. Based on the accumulated environmental data, the VFH method then computes a one-dimensional Polar Histogram that is constructed around the robot's momentary location. Each sector in the Polar Histogram holds thepolar obstacle density in that direction. Finally, the algorithm selects the most suitable sector from among aU Polar Hisfogmi sectors with low obstacle density, and the steering of the robot is aligned with that direction. Experimental results from a mobile robot traversing a densely cluttered obstacle course at an average speed of 0.7 m/sec demonstrate the power of the VFH method.", "title": "" } ]
[ { "docid": "e0c832f48352a5cb107a41b0907ad707", "text": "In the same commercial ecosystem, although the different main bodies of logistics service such as transportation, suppliers and purchasers drive their interests differently, all the different stakeholders in the same business or consumers coexist mutually and share resources with each other. Based on this, this paper constructs a model of bonded logistics supply chain management based on the theory of commercial ecology, focusing on the logistics mode of transportation and multi-attribute behavior decision-making model based on the risk preference of the mode of transport of goods. After the weight is divided, this paper solves the model with ELECTRE-II algorithm and provides a scientific basis for decision-making of bonded logistics supply chain management through the decision model and ELECTRE-II algorithm.", "title": "" }, { "docid": "217a6fd7c99ae3277750a7644f05b817", "text": "In this paper, we discuss the use of ontologies for semantic interoperability and integration. We argue that information technology has evolved into a world of largely loosely coupled systems and as such, needs increasingly more explicit, machine-interpretable semantics. Ontologies in the form of logical domain theories and their knowledge bases offer the richest representations of machine-interpretable semantics for systems and databases in the loosely coupled world, thus ensuring greater semantic interoperability and integration. Finally, we discuss how ontologies support semantic interoperability in the real, commercial and governmental world.", "title": "" }, { "docid": "3e44a5c966afbeabff11b54bafcefdce", "text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.", "title": "" }, { "docid": "fc9061348b46fc1bf7039fa5efcbcea1", "text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. 
We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.", "title": "" }, { "docid": "5fd0b013ee2778ac6328729566eb1481", "text": "As more and more virtual machines (VM) are packed into a physical machine, refactoring common kernel components shared by the virtual machines running on the same physical machine significantly reduces the overall resource consumption. A refactored kernel component typically runs on a special VM called a virtual appliance. Because of the semantics gap in Hardware Abstraction Layer (HAL)-based virtualization, a physical machine's virtual appliance requires the support of per-VM in-guest agents to perform VM-specific operations such as kernel data structure access and modification. To simplify deployment, these agents must be injected into guest virtual machines without requiring any manual installation. Moreover, it is essential to protect the integrity of in-guest agents at run time, especially when the underlying refactored kernel service is security-related. This paper describes the design, implementation and evaluation of a surreptitious kernel agent deployment and execution mechanism called SADE that requires zero installation effort and effectively hides the execution of agent code. To demonstrate the efficacy of SADE, we describe a signature-based memory scanning virtual appliance that uses SADE to inject its in-guest kernel agents without any support from the injected virtual machine, and show that both the start-up overhead and the run-time performance penalty of SADE are quite modest in practice.", "title": "" }, { "docid": "2f471c24ccb38e70627eba6383c003e0", "text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.", "title": "" }, { "docid": "c56c45405e0a943e63ab035b11b9fd93", "text": "We present a simple, but expressive type system that supports strong updates—updating a memory cell to hold values of unrelated types at different points in time. 
Our formulation is based upon a standard linear lambda calculus and, as a result, enjoys a simple semantic interpretation for types that is closely related to models for spatial logics. The typing interpretation is strong enough that, in spite of the fact that our core programming language supports shared, mutable references and cyclic graphs, every well-typed program terminates. We then consider extensions needed to model ML-style references, where the capability to access a reference cell is unrestricted, but strong updates are disallowed. Our extensions include a thaw primitive for re-gaining the capability to perform strong updates on unrestricted references. The thaw primitive is closely related to other mechanisms that support strong updates, such as CQUAL’s restrict.", "title": "" }, { "docid": "7a1541f523273fa0e8fbb53bf4259698", "text": "We derive the proper form of the Akaike information criterion for variable selection for mixture cure models, which are often fit via the expectation-maximization algorithm. Separate covariate sets may be used in the mixture components. The selection criteria are applicable to survival models for right-censored data with multiple competing risks and allow for the presence of an insusceptible group. The method is illustrated on credit loan data, with pre-payment and default as events and maturity as the insusceptible case and is used in a simulation study.", "title": "" }, { "docid": "62d93b9bcc66f402cd045f8586b0b62f", "text": "Passive crossbar resistive random access memory (RRAM) arrays require select devices with nonlinear I-V characteristics to address the sneak-path problem. Here, we present a systematical analysis to evaluate the performance requirements of select devices during the read operation of RRAM arrays for the proposed one-selector-one-resistor (1S1R) configuration with serially connected selector/storage element. We found high selector current density is critical and the selector nonlinearity (ON/OFF) requirement can be relaxed at present. Different read schemes were analyzed to achieve high read margin and low power consumption. Design optimizations of the sense resistance and the storage elements are also discussed.", "title": "" }, { "docid": "f7d05b0efbf8fbd46e0294585b6db97c", "text": "We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The predicted fog density using FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. 
As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but also is well suited for image defogging. A new FADE-based referenceless perceptual image defogging, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE) achieves better results for darker, denser foggy images as well as on standard foggy images than the state of the art defogging methods. A software release of FADE and DEFADE is available online for public use: <;uri xlink:href=\"http://live.ece.utexas.edu/research/fog/index.html\" xlink:type=\"simple\">http://live.ece.utexas.edu/research/fog/index.html<;/uri>.", "title": "" }, { "docid": "003e0146613ed3a781ea1e866128f2d9", "text": "Virtual characters are an important part of many 3D graphical simulations. In entertainment or training applications, virtual characters might be one of the main mechanisms for creating and developing content and scenarios. In such applications the user may need to interact with a number of different characters that need to invoke specific responses in the user, so that the user interprets the scenario in the way that the designer intended. Whilst representations of virtual characters have come a long way in recent years, interactive virtual characters tend to be a bit “wooden” with respect to their perceived behaviour. In this STAR we give an overview of work on expressive virtual characters. In particular, we assume that a virtual character representation is already available, and we describe a variety of models and methods that are used to give the characters more “depth” so that they are less wooden and more plausible. We cover models of individual characters’ emotion and personality, models of interpersonal behaviour and methods for generating expression.", "title": "" }, { "docid": "830723dc2be3c495ba366fcad9623da6", "text": "We present a new approach for neural machine translation (NMT) using the morphological and grammatical decomposition of the words (factors) in the output side of the neural network. This architecture addresses two main problems occurring in MT, namely dealing with a large target language vocabulary and the out of vocabulary (OOV) words. By the means of factors, we are able to handle larger vocabulary and reduce the training time (for systems with equivalent target language vocabulary size). In addition, we can produce new words that are not in the vocabulary. We use a morphological analyser to get a factored representation of each word (lemmas, Part of Speech tag, tense, person, gender and number). We have extended the NMT approach with attention mechanism (Bahdanau et al., 2014) in order to have two different outputs , one for the lemmas and the other for the rest of the factors. The final translation is built using some a priori linguistic information. We compare our extension with a word-based NMT system. The experiments, performed on the IWSLT’15 dataset translating from English to French, show that while the performance do not always increase, the system can manage a much larger vocabulary and consistently reduce the OOV rate. We observe up to 2% BLEU point improvement in a simulated out of domain translation setup.", "title": "" }, { "docid": "67755a3dd06b09f458d1ee013e18c8ef", "text": "Spiking neural networks are naturally asynchronous and use pulses to carry information. In this paper, we consider implementing such networks on a digital chip. 
We used an event-based simulator and we started from a previously established simulation, which emulates an analog spiking neural network, that can extract complex and overlapping, temporally correlated features. We modified this simulation to allow an easier integration in an embedded digital implementation. We first show that a four bits synaptic weight resolution is enough to achieve the best performance, although the network remains functional down to a 2 bits weight resolution. Then we show that a linear leak could be implemented to simplify the neurons leakage calculation. Finally, we demonstrate that an order-based STDP with a fixed number of potentiated synapses as low as 200 is efficient for features extraction. A simulation including these modifications, which lighten and increase the efficiency of digital spiking neural network implementation shows that the learning behavior is not affected, with a recognition rate of 98% in a cars trajectories detection application.", "title": "" }, { "docid": "7e25a158504f25e2f44a3021a3266d9d", "text": "We develop an algorithm for automatic discovery of precursors in time series data (ADOPT). In a time series setting, a precursor may be considered as any event that precedes and increases the likelihood of an adverse event. In a multivariate time series data, there are exponential number of events which makes a brute force search intractable. ADOPT works by breaking down the problem into two steps (1) inferring a model of the nominal time series (data without adverse event) by considering the nominal data to be generated by a hidden expert and (2) using the expert’s model as a benchmark to evaluate the adverse time series to identify suboptimal events as precursors. For step (1), we use a Markov Decision Process (MDP) framework where value functions and Bellman’s optimality are used to infer the expert’s actions. For step (2), we define a precursor score to evaluate a given instant of a time series by comparing its utility with that of the expert. Thus, the search for precursors is transformed to a search for sub-optimal action sequences in ADOPT. As an application case study, we use ADOPT to discover precursors to go-around events in commercial flights using real", "title": "" }, { "docid": "1979fa5a3384477602c0e81ba62199da", "text": "Language style transfer is the problem of migrating the content of a source sentence to a target style. In many of its applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. Under this problem setting, we propose an encoder-decoder framework. First, each sentence is encoded into its content and style latent representations. Then, by recombining the content with the target style, we decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions. The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. 
We validate the effectiveness of our model in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.", "title": "" }, { "docid": "e6457f5257e95d727e06e212bef2f488", "text": "The emerging ability to comply with caregivers' dictates and to monitor one's own behavior accordingly signifies a major growth of early childhood. However, scant attention has been paid to the developmental course of self-initiated regulation of behavior. This article summarizes the literature devoted to early forms of control and highlights the different philosophical orientations in the literature. Then, focusing on the period from early infancy to the beginning of the preschool years, the author proposes an ontogenetic perspective tracing the kinds of modulation or control the child is capable of along the way. The developmental sequence of monitoring behaviors that is proposed calls attention to contributions made by the growth of cognitive skills. The role of mediators (e.g., caregivers) is also discussed.", "title": "" }, { "docid": "47d338ae9346653a6044a6178e2e3227", "text": "Detecting objects in video and tracking their motion to identify their characteristics has been emerging as a demanding research area in the domain of image processing and computer vision. It has applications in visual surveillance, such as real-time tracking of objects of interest and traffic monitoring. This paper presents a review of the phases of video analysis, i.e., detection of moving objects of interest and tracking of such objects from frame to frame. Generally, visual surveillance can be classified into three phases of data processing: moving object recognition, object extraction and tracking, and extraction of temporal information about such objects. This review presents the techniques available for detection and tracking, their fundamental study, and a comparative analysis of these techniques in visual surveillance. General Terms: Moving object detection, object tracking, object representation, visual surveillance.", "title": "" }, { "docid": "8aadbd4f7e91d3a9bd4ce13b22d302d1", "text": "The design and validation of a wireless monitoring system for dealing with wildlife road crossing problems is addressed. The wildlife detection procedure is based on the Doppler radar technology integrated in wireless sensor network devices. Such a solution tries to overcome the so-called habit effect arising with standard alert road-systems (e.g., static or flashing road signs) by introducing the principle of real-time and event-based driver notification. To this end, the radar signal is locally processed by the wireless node to infer the target presence close to roadsides. In case of radar detection, the wireless node promptly transmits the collected information to the control unit, for data storage and further statistics. A prototype of the system has been deployed in a real test-site in the Alps region for performance assessment. A selected set of preliminary results are here presented and discussed to show the capabilities of the proposed solution.", "title": "" }, { "docid": "218e80c55d0d184b5c699b3df7d3377d", "text": "In the state-of-the-art video-based smoke detection methods, the representation of smoke mainly depends on the visual information in the current image frame. In the case of light smoke, the original background can still be seen and may deteriorate the characterization of smoke.
The core idea of this paper is to demonstrate the superiority of using the smoke component for smoke detection. In order to obtain the smoke component, a blended image model is constructed, which is basically a linear combination of background and smoke components. Smoke opacity, which represents a weighting of the smoke component, is also defined. Based on this model, an optimization problem is posed. An algorithm is devised to solve for the smoke opacity and smoke component, given an input image and the background. The resulting smoke opacity and smoke component are then used to perform the smoke detection task. The experimental results on both synthesized and real image data verify the effectiveness of the proposed method.", "title": "" }, { "docid": "52be5bbccc0c4a840585dccc629e2412", "text": "A voltage scaling technique for energy-efficient operation requires an adaptive power-supply regulator to significantly reduce dynamic power consumption in synchronous digital circuits. A digitally controlled power converter that dynamically tracks circuit performance with a ring oscillator and regulates the supply voltage to the minimum required to operate at a desired frequency is presented. This paper investigates the issues involved in designing a fully digital power converter and describes a design fabricated in a MOSIS 0.8-μm process. A variable-frequency digital controller design takes advantage of the power savings available through adaptive supply-voltage scaling and demonstrates converter efficiency greater than 90% over a dynamic range of regulated voltage levels.", "title": "" } ]
scidocsrr
b21d05f75f01fa916090c0ae58f8a897
Aligning Gaussian-Topic with Embedding Network for Summarization Ranking
[ { "docid": "6021b5aa102fe910eb7428265c056fc8", "text": "Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.", "title": "" } ]
[ { "docid": "4053bbaf8f9113bef2eb3b15e34a209a", "text": "With the recent availability of commodity Virtual Reality (VR) products, immersive video content is receiving a significant interest. However, producing high-quality VR content often requires upgrading the entire production pipeline, which is costly and time-consuming. In this work, we propose using video feeds from regular broadcasting cameras to generate immersive content. We utilize the motion of the main camera to generate a wide-angle panorama. Using various techniques, we remove the parallax and align all video feeds. We then overlay parts from each video feed on the main panorama using Poisson blending. We examined our technique on various sports including basketball, ice hockey and volleyball. Subjective studies show that most participants rated their immersive experience when viewing our generated content between Good to Excellent. In addition, most participants rated their sense of presence to be similar to ground-truth content captured using a GoPro Omni 360 camera rig.", "title": "" }, { "docid": "12819e1ad6ca9b546e39ed286fe54d23", "text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.", "title": "" }, { "docid": "0f15d9b4f82d76ac1162c2fddda8bd97", "text": "Detecting access to video streaming websites is the first step for an organization to regulate unwanted accesses to such sites by its employees. Adversaries often adopt circumvention techniques using proxy servers and Virtual Private Networks (VPNs) in order to avoid such detection. This paper presents a traffic analysis based technique that can detect such tunneled traffic at an organization's firewall using signatures found in traffic amount and timing in targeted video traffic. We present the detection results on the traffic data for several popular video streaming sites. Additional results are presented to validate the detection framework when detecting access to video streaming sites from a wide range of clients with a classifier trained with traffic data collected from a limited number of clients. The results show that the classifier works in both cases. It detects same-client traffic with high true positive rate, while it detects traffic from an unknown client with lower true positive rate but very low false positive rate. The results validate the effectiveness of traffic analysis based detection of video streaming sites.", "title": "" }, { "docid": "bb72e4d6f967fb88473756cdcbb04252", "text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. 
These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.", "title": "" }, { "docid": "3731d3071b7447e888567c078e39bf80", "text": "Mixed-type categorical and numerical data are a challenge in many applications. This general area of mixed-type data is among the frontier areas, where computational intelligence approaches are often brittle compared with the capabilities of living creatures. In this paper, unsupervised feature learning (UFL) is applied to the mixed-type data to achieve a sparse representation, which makes it easier for clustering algorithms to separate the data. Unlike other UFL methods that work with homogeneous data, such as image and video data, the presented UFL works with the mixed-type data using fuzzy adaptive resonance theory (ART). UFL with fuzzy ART (UFLA) obtains a better clustering result by removing the differences in treating categorical and numeric features. The advantages of doing this are demonstrated with several real-world data sets with ground truth, including heart disease, teaching assistant evaluation, and credit approval. The approach is also demonstrated on noisy, mixed-type petroleum industry data. UFLA is compared with several alternative methods. To the best of our knowledge, this is the first time UFL has been extended to accomplish the fusion of mixed data types.", "title": "" }, { "docid": "1315247aa0384097f5f9e486bce09bd4", "text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.", "title": "" }, { "docid": "2a93b6b0d430327776faf6acf275062f", "text": "Remote photoplethysmography (rPPG) allows remote measurement of the heart rate using low-cost RGB imaging equipment. In this study, we review the development of the field of rPPG since its emergence in 2008. We also classify existing rPPG approaches and derive a framework that provides an overview of modular steps. Based on this framework, practitioners can use our classification to design algorithms for an rPPG approach that suits their specific needs. 
Researchers can use the reviewed and classified algorithms as a starting point to improve particular features of an rPPG algorithm.", "title": "" }, { "docid": "1623c4b3dad0caf250df0cbe32af3f63", "text": "This paper describes and evaluates a high-fidelity, low-cost haptic interface for tele-operation. The interface is a wearable vibrotactile glove containing miniature voice coils that provides continuous, proportional force information to the user's finger-tips. In psychophysical experiments, correlated variations in the frequency and amplitude of the stimulators extended the user's perceptual response range compared to varying amplitude or frequency alone. In an adaptive, force-limited, pick-and-place manipulation task, the interface allowed users to control the grip forces more effectively than no feedback or binary feedback, which produced equivalent performance. A sorting experiment established that proportional tactile feedback enhances the user's ability to discriminate the relative properties of objects, such as weight. We conclude that correlated amplitude and frequency signals, simulating force in a remote environment, substantially improve teleoperation.", "title": "" }, { "docid": "f784ffcdb63558f5f22fe90058853904", "text": "Stylometric analysis of prose is typically limited to classification tasks such as authorship attribution. Since the models used are typically black boxes, they give little insight into the stylistic differences they detect. In this paper, we characterize two prose genres syntactically: chick lit (humorous novels on the challenges of being a modern-day urban female) and high literature. First, we develop a top-down computational method based on existing literary-linguistic theory. Using an off-the-shelf parser we obtain syntactic structures for a Dutch corpus of novels and measure the distribution of sentence types in chick-lit and literary novels. The results show that literature contains more complex (subordinating) sentences than chick lit. Secondly, a bottom-up analysis is made of specific morphological and syntactic features in both genres, based on the parser’s output. This shows that the two genres can be distinguished along certain features. Our results indicate that detailed insight into stylistic differences can be obtained by combining computational linguistic analysis with literary theory.", "title": "" }, { "docid": "4249d40ff2bad24af73671af24f6f031", "text": "We present Charticulator, an interactive authoring tool that enables the creation of bespoke and reusable chart layouts. Charticulator is our response to most existing chart construction interfaces that require authors to choose from predefined chart layouts, thereby precluding the construction of novel charts. In contrast, Charticulator transforms a chart specification into mathematical layout constraints and automatically computes a set of layout attributes using a constraint-solving algorithm to realize the chart. It allows for the articulation of compound marks or glyphs as well as links between these glyphs, all without requiring any coding or knowledge of constraint satisfaction. Furthermore, thanks to the constraint-based layout approach, Charticulator can export chart designs into reusable templates that can be imported into other visualization tools. 
In addition to describing Charticulator's conceptual framework and design, we present three forms of evaluation: a gallery to illustrate its expressiveness, a user study to verify its usability, and a click-count comparison between Charticulator and three existing tools. Finally, we discuss the limitations and potentials of Charticulator as well as directions for future research. Charticulator is available with its source code at https://charticulator.com.", "title": "" }, { "docid": "f0bbf04880a84b9fc814af4ae2ef8867", "text": "BACKGROUND AND PURPOSE\nThe Very Early Nimodipine Use in Stroke (VENUS) trial was designed to test the hypothesis that early treatment with nimodipine has a positive effect on survival and functional outcome after stroke. This was suggested in a previous meta-analysis on the use of nimodipine in stroke. However, in a recent Cochrane review we were unable to reproduce these positive results. This led to the early termination of VENUS after an interim analysis.\n\n\nMETHODS\nIn this randomized, double-blind, placebo-controlled trial, treatment was started by general practitioners or neurologists within 6 hours after stroke onset (oral nimodipine 30 mg QID or identical placebo, for 10 days). Main analyses included comparisons of the primary end point (poor outcome, defined as death or dependency after 3 months) and secondary end points (neurological status and blood pressure 24 hours after inclusion, mortality after 10 days, and adverse events) between treatment groups. Subgroup analyses (on final diagnosis and based on the per-protocol data set) were performed.\n\n\nRESULTS\nAt trial termination, after inclusion of 454 patients (225 nimodipine, 229 placebo), no effect of nimodipine was found. After 3 months of follow-up, 32% (n=71) of patients in the nimodipine group had a poor outcome compared with 27% (n=62) in the placebo group (relative risk, 1.2; 95% CI, 0.9 to 1.6). A treatment effect was not found for secondary outcomes and in the subgroup analyses.\n\n\nCONCLUSIONS\nThe results of VENUS do not support the hypothesis of a beneficial effect of early nimodipine in stroke patients.", "title": "" }, { "docid": "34d16a5eb254846f431e2c716309e20a", "text": "AIM\nWe investigated the uptake and pharmacokinetics of l-ergothioneine (ET), a dietary thione with free radical scavenging and cytoprotective capabilities, after oral administration to humans, and its effect on biomarkers of oxidative damage and inflammation.\n\n\nRESULTS\nAfter oral administration, ET is avidly absorbed and retained by the body with significant elevations in plasma and whole blood concentrations, and relatively low urinary excretion (<4% of administered ET). ET levels in whole blood were highly correlated to levels of hercynine and S-methyl-ergothioneine, suggesting that they may be metabolites. After ET administration, some decreasing trends were seen in biomarkers of oxidative damage and inflammation, including allantoin (urate oxidation), 8-hydroxy-2'-deoxyguanosine (DNA damage), 8-iso-PGF2α (lipid peroxidation), protein carbonylation, and C-reactive protein. However, most of the changes were non-significant.\n\n\nINNOVATION\nThis is the first study investigating the administration of pure ET to healthy human volunteers and monitoring its uptake and pharmacokinetics. 
This compound is rapidly gaining attention due to its unique properties, and this study lays the foundation for future human studies.\n\n\nCONCLUSION\nThe uptake and retention of ET by the body suggests an important physiological function. The decreasing trend of oxidative damage biomarkers is consistent with animal studies suggesting that ET may function as a major antioxidant but perhaps only under conditions of oxidative stress. Antioxid. Redox Signal. 26, 193-206.", "title": "" }, { "docid": "e687452d18f07c048483d090d1666bb3", "text": "Online dating websites are popular platforms for adults to search for their life partners. Because on online dating websites, a user's profile image is an important factor determining others' impressions, we focus on profile images and analyze users' visual attractiveness in this study. Facial attractiveness is strongly related to our perception of aesthetics and therefore we believe our investigation can somewhat contribute to artwork analysis. We use pre-trained convolutional neural networks (CNN) to extract visual features and propose a new method to rank users' attractiveness from their online dating interactions. For both genders, we predict users' facial attractiveness by supervised machine learning. Our experimental results show that deep representations of profile images are powerful in capturing differences in facial attributes and perform well in predicting users' attractiveness. Correlation coefficients of 0.462 for male users and 0.387 for female users are obtained for regression. Accuracies of 75% for females and 78.8% for males are obtained for 2-level classification.", "title": "" }, { "docid": "a90f865e053b9339052a4d00281dbd03", "text": "Generation of 3D data by deep neural networks has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output: point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in groundtruth, we design an architecture, loss function, and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.", "title": "" }, { "docid": "589396a7c9dae0567f0bcd4d83461a6f", "text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS.
This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.", "title": "" }, { "docid": "b8e84a607622ec8515233a14116f9c9f", "text": "PAN 2018 explores several authorship analysis tasks enabling a systematic comparison of competitive approaches and advancing research in digital text forensics. More specifically, this edition of PAN introduces a shared task in cross-domain authorship attribution, where texts of known and unknown authorship belong to distinct domains, and another task in style change detection that distinguishes between single-author and multi-author texts. In addition, a shared task in multimodal author profiling examines, for the first time, a combination of information from both texts and images posted by social media users to estimate their gender. Finally, the author obfuscation task studies how a text by a certain author can be paraphrased so that existing author identification tools are confused and cannot recognize the similarity with other texts of the same author. New corpora have been built to support these shared tasks. A relatively large number of software submissions (41 in total) was received and evaluated. Best paradigms are highlighted while baselines indicate the pros and cons of submitted approaches.", "title": "" }, { "docid": "ce8de212a3ef98f8e8bd391e731108af", "text": "Direct democracy is often proposed as a possible solution to the 21st-century problems of democracy. However, this suggestion clashes with the size and complexity of 21st-century societies, entailing an excessive cognitive burden on voters, who would have to submit informed opinions on an excessive number of issues. In this paper I argue for the development of “voting avatars”, autonomous agents debating and voting on behalf of each citizen. 
Theoretical research from artificial intelligence, and in particular multiagent systems and computational social choice, proposes 21st-century techniques for this purpose, from the compact representation of a voter’s preferences and values, to the development of voting procedures for autonomous agents use only.", "title": "" }, { "docid": "dc297b1e32fdc4597d1ec9f1d56aa743", "text": "Although joint inference is an effective approach to avoid cascading of errors when inferring multiple natural language tasks, its application to information extraction has been limited to modeling only two tasks at a time, leading to modest improvements. In this paper, we focus on the three crucial tasks of automated extraction pipelines: entity tagging, relation extraction, and coreference. We propose a single, joint graphical model that represents the various dependencies between the tasks, allowing flow of uncertainty across task boundaries. Since the resulting model has a high tree-width and contains a large number of variables, we present a novel extension to belief propagation that sparsifies the domains of variables during inference. Experimental results show that our joint model consistently improves results on all three tasks as we represent more dependencies. In particular, our joint model obtains 12% error reduction on tagging over the isolated models.", "title": "" }, { "docid": "8a31704d12d042618dd9e69f0aebd813", "text": "a r t i c l e i n f o Keywords: Antisocial personality disorder Psychopathy Amygdala Orbitofrontal cortex Monoamine oxidase SNAP proteins Psychopathy is perhaps one of the most misused terms in the American public, which is in no small part due to our obsession with those who have no conscience, and our boldness to try and profile others with this disorder. Here, I present how psychopathy is seen today, before discussing the classification of psychopathy. I also explore the neurological differences in the brains of those with psychopathy, before finally taking a look at genetic risk factors. I conclude by raising some questions about potential treatment.", "title": "" } ]
scidocsrr
98ba20f17cb683c111620aaf3744458b
DEVELOPMENT OF A BROADBAND HORIZONTALLY POLARIZED OMNIDIRECTIONAL PLANAR ANTENNA AND ITS ARRAY FOR BASE STATIONS
[ { "docid": "8be94cf3744cf18e29c4f41b727cc08a", "text": "A printed dipole with an integrated balun features a broad operating bandwidth. The feed point of conventional balun structures is fixed at the top of the integrated balun, which makes it difficult to match to a 50-Ω feed. In this communication, we demonstrate that it is possible to directly match with the 50-Ω feed by adjusting the position of the feed point of the integrated balun. The printed dipole with the hereby presented adjustable integrated balun maintains the broadband performance and exhibits flexibility for the matching to different impedance values, which is extremely important for the design of antenna arrays since the mutual coupling between antenna elements commonly changes the input impedance of each single element. An equivalent-circuit analysis is presented for the understanding of the mechanism of the impedance match. An eight-element linear antenna array is designed as a benchmarking topology for broadband wireless base stations.", "title": "" }, { "docid": "e90e2a651c54b8510efe00eb1d8e7be0", "text": "The design, simulation, fabrication, and measurement of a 2.4-GHz horizontally polarized omnidirectional planar printed antenna for WLAN applications is presented. The antenna adopts the printed Alford-loop-type structure. The three-dimensional (3-D) EM simulator HFSS is used for design simulation. The designed antenna is fabricated on an FR-4 printed-circuit-board substrate. The measured input standing-wave-ratio (SWR) is less than three from 2.40 to 2.483 GHz. As desired, the horizontal-polarization H-plane pattern is quite omnidirectional and the E-plane pattern is also very close to that of an ideal dipole antenna. Also, a comparison with the popular printed inverted-F antenna (PIFA) has been conducted; the measured H-plane pattern of the Alford-loop-structure antenna is better than that of the PIFA when the omnidirectional pattern is desired. Furthermore, the study of the antenna printed on a simulated PCMCIA card and of the antenna inserted inside a laptop PC is also conducted. The HFSS model of a laptop PC housing, consisting of the display, the screen, and the metallic box with the keyboard, is constructed. The effect of the laptop PC housing with different angles between the display and keyboard on the antenna is also investigated. It is found that there is about 15 dB attenuation of the gain pattern (horizontal-polarization field) in the opposite direction of the PCMCIA slot on the laptop PC. Hence, the effect of the large ground plane of the PCMCIA card and the attenuation effect of the laptop PC housing should be taken into consideration for the antenna design for WLAN applications. For the proposed antenna, in addition to being used alone as a horizontally polarized antenna, it can also be part of a diversity antenna.", "title": "" } ]
[ { "docid": "6992762ad22f9e33db6ded9430e06848", "text": "Solution M and C are strictly dominated and hence cannot receive positive probability in any Nash equilibrium. Given that only L and R receive positive probability, T cannot receive positive probability either. So, in any Nash equilibrium player 1 must play B with probability one. Given that, any probability distribution over L and R is a best response for player 2. In other words, the set of Nash equilibria is given by", "title": "" }, { "docid": "1fd83e5db732a1169aef1e1aae71fe54", "text": "In the present paper, we analyze the past, present and future of medicinal plants, both as potential antimicrobial crude drugs as well as a source for natural compounds that act as new anti-infection agents. In the past few decades, the search for new anti-infection agents has occupied many research groups in the field of ethnopharmacology. When we reviewed the number of articles published on the antimicrobial activity of medicinal plants in PubMed during the period between 1966 and 1994, we found 115; however, in the following decade between 1995 and 2004, this number more than doubled to 307. In the studies themselves one finds a wide range of criteria. Many focus on determining the antimicrobial activity of plant extracts found in folk medicine, essential oils or isolated compounds such as alkaloids, flavonoids, sesquiterpene lactones, diterpenes, triterpenes or naphtoquinones, among others. Some of these compounds were isolated or obtained by bio-guided isolation after previously detecting antimicrobial activity on the part of the plant. A second block of studies focuses on the natural flora of a specific region or country; the third relevant group of papers is made up of specific studies of the activity of a plant or principle against a concrete pathological microorganism. Some general considerations must be established for the study of the antimicrobial activity of plant extracts, essential oils and the compounds isolated from them. Of utmost relevance is the definition of common parameters, such as plant material, techniques employed, growth medium and microorganisms tested.", "title": "" }, { "docid": "1b2d7b2895ae4b996797ea64ddbae14e", "text": "For the past decade, query processing on relational data has been studied extensively, and many theoretical and practical solutions to query processing have been proposed under various scenarios. With the recent popularity of cloud computing, users now have the opportunity to outsource their data as well as the data management tasks to the cloud. However, due to the rise of various privacy issues, sensitive data (e.g., medical records) need to be encrypted before outsourcing to the cloud. In addition, query processing tasks should be handled by the cloud; otherwise, there would be no point to outsource the data at the first place. To process queries over encrypted data without the cloud ever decrypting the data is a very challenging task. In this paper, we focus on solving the k-nearest neighbor (kNN) query problem over encrypted database outsourced to a cloud: a user issues an encrypted query record to the cloud, and the cloud returns the k closest records to the user. We first present a basic scheme and demonstrate that such a naive solution is not secure. To provide better security, we propose a secure kNN protocol that protects the confidentiality of the data, user's input query, and data access patterns. 
Also, we empirically analyze the efficiency of our protocols through various experiments. These results indicate that our secure protocol is very efficient on the user end, and this lightweight scheme allows a user to use any mobile device to perform the kNN query.", "title": "" }, { "docid": "139915d2aaf3698093b73ca81ebd7ad8", "text": "When caring for patients, it is essential that nurses are using the current best practice. To determine what this is, nurses must be able to read research critically. But for many qualified and student nurses, the terminology used in research can be difficult to understand, thus making critical reading even more daunting. It is imperative in nursing that care has its foundations in sound research, and it is essential that all nurses have the ability to critically appraise research to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology.", "title": "" }, { "docid": "5df6adf6047556842e93aa3f83578554", "text": "Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in `saccade and fixate' regimes, the methodology and emphasis in the human and the computer vision communities remains sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing state-of-the art large scale dynamic computer vision annotated datasets like Hollywood-2 [1] and UCF Sports [2] with human eye movements collected under the ecological constraints of visual action and scene context recognition tasks. To our knowledge these are the first large human eye tracking datasets to be collected and made publicly available for video, vision.imar.ro/eyetracking (497,107 frames, each viewed by 19 subjects), unique in terms of their (a) large scale and computer vision relevance, (b) dynamic, video stimuli, (c) task control, as well as free-viewing. Second, we introduce novel dynamic consistency and alignment measures, which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data in order to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies not only shed light on the differences between computer vision spatio-temporal interest point image sampling strategies and the human fixations, as well as their impact for visual recognition performance, but also demonstrate that human fixations can be accurately predicted, and when used in an end-to-end automatic system, leveraging some of the advanced computer vision practice, can lead to state of the art results.", "title": "" }, { "docid": "1ff61150d7c8359d3dead84612093754", "text": "In this work, a novel learning-based approach has been developed to generate driving paths by integrating LIDAR point clouds, GPS-IMU information, and Google driving directions. The system is based on a fully convolutional neural network that jointly learns to carry out perception and path generation from real-world driving sequences and that is trained using automatically generated training examples. 
Several combinations of input data were tested in order to assess the performance gain provided by specific information modalities. The fully convolutional neural network trained using all the available sensors together with driving directions achieved the best MaxF score of 88.13% when considering a region of interest of 60×60 meters. By considering a smaller region of interest, the agreement between predicted paths and ground-truth increased to 92.60%. The positive results obtained in this work indicate that the proposed system may help fill the gap between low-level scene parsing and behavior-reflex approaches by generating outputs that are close to vehicle control and at the same time human-interpretable.", "title": "" }, { "docid": "5f9b06461aa5f2cd941323f6a50e6ab5", "text": "We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations: (1) a grammar for the decoder that only generates well-typed logical forms; and (2) an entity embedding and linking module that identifies entity mentions while generalizing across tables. We also introduce a novel method for training our neural model with question-answer supervision. On the WIKITABLEQUESTIONS data set, our parser achieves a state-of-the-art accuracy of 43.3% for a single model and 45.9% for a 5-model ensemble, improving on the best prior score of 38.7% set by a 15-model ensemble. These results suggest that type constraints and entity linking are valuable components to incorporate in neural semantic parsers.", "title": "" }, { "docid": "d2f36cc750703f5bbec2ea3ef4542902", "text": "Mixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram [1]. We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR), and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR characteristic: users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction.
We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …", "title": "" }, { "docid": "61ed9242764dad47daf7b7fc47865c88", "text": "Haar-Cascade classifier method has been applied to detect the presence of a human on the thermal image. The evaluation was done on the performance of detection, represented by its precision and recall values. The thermal camera images were varied to obtain comprehensive results, which covered the distance of the object from the camera, the angle of the camera to the object, the number of objects, and the environmental conditions during image acquisition. The results showed that the greater the camera-object distance, the precision and recall of human detection results declined. Human objects would also be hard to detect if his/her pose was not facing frontally. The method was able to detect more than one human in the image with positions of in front of each other, side by side, or overlapped to one another. However, if there was any other object in the image that had characteristics similar to a human, the object would also be detected as a human being, resulting in a false detection. These other objects could be an infrared shadow formed from the reflection on glass or painted walls.", "title": "" }, { "docid": "1eb4805e6874ea1882a995d0f1861b80", "text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.", "title": "" }, { "docid": "06b99205e1dc53e5120a22dc4f927aa0", "text": "The last 2 decades witnessed a surge in empirical studies on the variables associated with achievement in higher education. A number of meta-analyses synthesized these findings. In our systematic literature review, we included 38 meta-analyses investigating 105 correlates of achievement, based on 3,330 effect sizes from almost 2 million students. We provide a list of the 105 variables, ordered by the effect size, and summary statistics for central research topics. The results highlight the close relation between social interaction in courses and achievement. Achievement is also strongly associated with the stimulation of meaningful learning by presenting information in a clear way, relating it to the students, and using conceptually demanding learning tasks. 
Instruction and communication technology has comparably weak effect sizes, which did not increase over time. Strong moderator effects are found for almost all instructional methods, indicating that how a method is implemented in detail strongly affects achievement. Teachers with high-achieving students invest time and effort in designing the microstructure of their courses, establish clear learning goals, and employ feedback practices. This emphasizes the importance of teacher training in higher education. Students with high achievement are characterized by high self-efficacy, high prior achievement and intelligence, conscientiousness, and the goal-directed use of learning strategies. Barring the paucity of controlled experiments and the lack of meta-analyses on recent educational innovations, the variables associated with achievement in higher education are generally well investigated and well understood. By using these findings, teachers, university administrators, and policymakers can increase the effectivity of higher education. (PsycINFO Database Record", "title": "" }, { "docid": "46a55d7a3349f7228acb226ed7875dc9", "text": "Previous research on driver drowsiness detection has focused primarily on lane deviation metrics and high levels of fatigue. The present research sought to develop a method for detecting driver drowsiness at more moderate levels of fatigue, well before accident risk is imminent. Eighty-seven different driver drowsiness detection metrics proposed in the literature were evaluated in two simulated shift work studies with high-fidelity simulator driving in a controlled laboratory environment. Twenty-nine participants were subjected to a night shift condition, which resulted in moderate levels of fatigue; 12 participants were in a day shift condition, which served as control. Ten simulated work days in the study design each included four 30-min driving sessions, during which participants drove a standardized scenario of rural highways. Ten straight and uneventful road segments in each driving session were designated to extract the 87 different driving metrics being evaluated. The dimensionality of the overall data set across all participants, all driving sessions and all road segments was reduced with principal component analysis, which revealed that there were two dominant dimensions: measures of steering wheel variability and measures of lateral lane position variability. The latter correlated most with an independent measure of fatigue, namely performance on a psychomotor vigilance test administered prior to each drive. We replicated our findings across eight curved road segments used for validation in each driving session. Furthermore, we showed that lateral lane position variability could be derived from measured changes in steering wheel angle through a transfer function, reflecting how steering wheel movements change vehicle heading in accordance with the forces acting on the vehicle and the road. This is important given that traditional video-based lane tracking technology is prone to data loss when lane markers are missing, when weather conditions are bad, or in darkness. 
Our research findings indicated that steering wheel variability provides a basis for developing a cost-effective and easy-to-install alternative technology for in-vehicle driver drowsiness detection at moderate levels of fatigue.", "title": "" }, { "docid": "45c3b5c25c738426d903c65767fdd86d", "text": "With the rapid development of the Internet and the explosion of text data, extracting valuable information from this ocean of text has become a very significant research subject. To realize multi-classification for text sentiment, this paper proposes an RNN language model based on Long Short Term Memory (LSTM), which can capture complete sequence information effectively. Compared with the traditional RNN language model, LSTM is better at analyzing the emotion of long sentences. As a language model, LSTM is applied to achieve multi-classification for text emotional attributes. Through training different emotion models, we can determine which emotion a sentence belongs to by using these models. Numerical experiments show that it can produce better accuracy and recall rates than the conventional RNN.", "title": "" }, { "docid": "a3333555cb07907594822d209098d5e4", "text": "In this paper, we provide a logical formalization of the emotion triggering process and of its relationship with mental attitudes, as described in Ortony, Clore, and Collins's theory. We argue that modal logics are particularly adapted to represent agents' mental attitudes and to reason about them, and use a specific modal logic that we call Logic of Emotions in order to provide logical definitions of all but two of their 22 emotions. While these definitions may be subject to debate, we show that they allow us to reason about emotions and to draw interesting conclusions from the theory.", "title": "" }, { "docid": "fe79ee9979ed13aa7d1625989adef9f9", "text": "In this paper we propose and carefully evaluate a sequence labeling framework which solely utilizes sparse indicator features derived from dense distributed word representations. The proposed model obtains (near) state-of-the-art performance for both part-of-speech tagging and named entity recognition for a variety of languages. Our model relies only on a few thousand sparse coding-derived features, without applying any modification of the word representations employed for the different tasks. The proposed model has favorable generalization properties as it retains over 89.8% of its average POS tagging accuracy when trained at 1.2% of the total available training data, i.e., 150 sentences per language.", "title": "" }, { "docid": "3a5be5b365cfdc6f29646bf97953fc18", "text": "Fuzzy set methods have been used to model and manage uncertainty in various aspects of image processing, pattern recognition, and computer vision. High-level computer vision applications hold a great potential for fuzzy set theory because of its links to natural language. Linguistic scene description, a language-based interpretation of regions and their relationships, is one such application that is starting to bear the fruits of fuzzy set theoretic involvement. In this paper, we are expanding on two earlier endeavors. We introduce new families of fuzzy directional relations that rely on the computation of histograms of forces. These families preserve important relative position properties. They provide inputs to a fuzzy rule base that produces logical linguistic descriptions along with assessments as to the validity of the descriptions.
Each linguistic output uses hedges from a dictionary of about 30 adverbs and other terms that can be tailored to individual users. Excellent results from several synthetic and real image examples show the applicability of this approach.", "title": "" }, { "docid": "ba901f44b42820202d0e81671e7f189e", "text": "In this paper, we present a novel method for visual loop-closure detection in autonomous robot navigation. Our method, which we refer to as bag-of-raw-features or BoRF, uses scale-invariant visual features (such as SIFT) directly, rather than their vector-quantized representation or bag-of-words (BoW), which is popular in recent studies of the problem. BoRF avoids the offline process of vocabulary construction, and does not suffer from the perceptual aliasing problem of BoW, thereby significantly improving the recall performance. To reduce the computational cost of direct feature matching, we exploit the fact that images in the case of robot navigation are acquired sequentially, and that feature matching repeatability with respect to scale can be learned and used to reduce the number of the features considered for matching. The proposed method is tested experimentally using indoor visual SLAM image sequences.", "title": "" }, { "docid": "406fbdfff4f7abb505c0e238e08decca", "text": "A computationally efficient method for detecting a chorus section in popular and rock music is presented. The method utilizes a distance matrix representation that is obtained by summing two separate distance matrices calculated using the mel-frequency cepstral coefficient and pitch chroma features. The benefit of computing two separate distance matrices is that different enhancement operations can be applied on each. An enhancement operation is found beneficial only for the chroma distance matrix. This is followed by detection of the off-diagonal segments of small distance from the distance matrix. From the detected segments, an initial chorus section is selected using a scoring mechanism utilizing several heuristics, and subjected to further processing. This further processing involves using image processing filters in a neighborhood of the distance matrix surrounding the initial chorus section. The final position and length of the chorus is selected based on the filtering results. On a database of 206 popular & rock music pieces an average F-measure of 86% is obtained. It takes about ten seconds to process a song with an average duration of three to four minutes on a Windows XP computer with a 2.8 GHz Intel Xeon processor.", "title": "" }, { "docid": "28fbb71fab5ea16ef52611b31fcf1dfa", "text": "Gamification, an emerging idea for using game design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, few research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and, based on a synthesis of the available literature, present a taxonomy of gamification design elements. 
We then develop a framework for research and design: its main theme is to create meaningful engagement for users; that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and, at the same time, advance existing theories.", "title": "" }, { "docid": "fc77cdf4712d15d21a787602fca94470", "text": "In this paper we present a Quantified SWOT (Strengths, Weaknesses, Opportunities and Threats) analytical method which provides more detailed and quantified data for SWOT analysis. The Quantified SWOT analytical method adopts the concept of Multiple-Attribute Decision Making (MADM), which uses a multi-layer scheme to simplify complicated problems, and thus is able to perform SWOT analysis on several enterprises simultaneously. Container ports in East Asia are taken as a case study in this paper. Quantified SWOT analysis is used to assess the competing strength of each port and then suggest an adoptable competing strategy for each. c © 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
6ef549738dfee2dac2300c90b4814087
New potential functions for mobile robot path planning
[ { "docid": "72bd06fddd10b159d9892a142bb74cba", "text": "A new real-time obstacle avoidance approach for mobile robots has been developed and implemented. This approach permits the detection of unknown obstacles simultaneously with the steering of the mobile robot to avoid collisions and advancing toward the target. The novelty of this approach, entitled the Virtual Force Field, lies in the integration of two known concepts: Certainty Grids for obstacle representation, and Potential Fields for navigation. This combination is especially suitable for the accommodation of inaccurate sensor data (such as produced by ultrasonic sensors) as well as for sensor fusion, and enables continuous motion of the robot without stopping in front of obstacles. This navigation algorithm also takes into account the dynamic behavior of a fast mobile robot and solves the \"local minimum trap\" problem. Experimental results from a mobile robot running at a maximum speed of 0.78 m/sec demonstrate the power of the proposed algorithm.", "title": "" }, { "docid": "5728682e998b89cb23b12ba9acc3d993", "text": "Potential field methods are rapidly gaining popularity in obstacle avoidance applications for mobile robots and manipulators. While the potential field principle is particularly attractive because of its elegance and simplicity, substantial shortcomings have been identified as problems that are inherent to this principle. Based upon mathematical analysis, this paper presents a systematic criticism of the inherent problems. The heart of this analysis is a differential equation that combines the robot and the environment into a unified system. The identified problems are discussed in qualitative and theoretical terms and documented with experimental results from actual mobile robot runs.", "title": "" } ]
[ { "docid": "7777b01fe7df8763fb3541f075f7b4d8", "text": "The target of the present review is to draw attention to many critically important unsolved problems in the future development of medicinal mushroom science in the twenty-first century. Special attention is paid to mushroom polysaccharides. Many, if not all, higher Basidiomycetes mushrooms contain biologically active polysaccharides in fruit bodies, cultured mycelium, and cultured broth. The data on mushroom polysaccharides are summarized for approximately 700 species of higher Hetero- and Homobasidiomycetes. The chemical structure of polysaccharides and its connection to antitumor activity, including possible ways of chemical modification, experimental testing and clinical use of antitumor or immunostimulating polysaccharides, and possible mechanisms of their biological action, are discussed. Numerous bioactive polysaccharides or polysaccharide–protein complexes from medicinal mushrooms are described that appear to enhance innate and cell-mediated immune responses and exhibit antitumor activities in animals and humans. Stimulation of host immune defense systems by bioactive polymers from medicinal mushrooms has significant effects on the maturation, differentiation, and proliferation of many kinds of immune cells in the host. Many of these mushroom polymers were reported previously to have immunotherapeutic properties by facilitating growth inhibition and destruction of tumor cells. While the mechanism of their antitumor actions is still not completely understood, stimulation and modulation of key host immune responses by these mushroom polymers appears central. Particularly and most importantly for modern medicine are polysaccharides with antitumor and immunostimulating properties. Several of the mushroom polysaccharide compounds have proceeded through phases I, II, and III clinical trials and are used extensively and successfully in Asia to treat various cancers and other diseases. A total of 126 medicinal functions are thought to be produced by medicinal mushrooms and fungi including antitumor, immunomodulating, antioxidant, radical scavenging, cardiovascular, antihypercholesterolemia, antiviral, antibacterial, antiparasitic, antifungal, detoxification, hepatoprotective, and antidiabetic effects.", "title": "" }, { "docid": "e1e836fe6ff690f9c85443d26a1448e3", "text": "■ We describe an apparatus and methodology to support real-time color imaging for night operations. Registered imagery obtained in the visible through nearinfrared band is combined with thermal infrared imagery by using principles of biological opponent-color vision. Visible imagery is obtained with a Gen III image intensifier tube fiber-optically coupled to a conventional charge-coupled device (CCD), and thermal infrared imagery is obtained by using an uncooled thermal imaging array. The two fields of view are matched and imaged through a dichroic beam splitter to produce realistic color renderings of a variety of night scenes. We also demonstrate grayscale and color fusion of intensified-CCD/FLIR imagery. Progress in the development of a low-light-sensitive visible CCD imager with high resolution and wide intrascene dynamic range, operating at thirty frames per second, is described. Example low-light CCD imagery obtained under controlled illumination conditions, from full moon down to overcast starlight, processed by our adaptive dynamic-range algorithm, is shown. 
The combination of a low-light visible CCD imager and a thermal infrared microbolometer array in a single dual-band imager, with a portable image-processing computer implementing our neural-net algorithms, and a color liquid-crystal display, yields a compact integrated version of our system as a solid-state color night-vision device. The systems described here can be applied to a large variety of military operations and civilian needs.", "title": "" }, { "docid": "9cdc7b6b382ce24362274b75da727183", "text": "Collaborative spectrum sensing is subject to attack by malicious secondary user(s), which may send false reports. Therefore, it is necessary to detect potential attacker(s) and then exclude the attacker's report for spectrum sensing. Many existing attacker-detection schemes are based on knowledge of the attacker's strategy and thus apply Bayesian attacker detection. However, in practical cognitive radio systems the data fusion center typically does not know the attacker's strategy. To alleviate the problem of the unknown strategy of attacker(s), an abnormality-detection approach, based on abnormality detection in data mining, is proposed. The performance of the attacker detection in the single-attacker scenario is analyzed explicitly. For the case in which the attacker does not know the reports of honest secondary users (called independent attack), it is shown that the attacker can always be detected as the number of spectrum sensing rounds tends to infinity. For the case in which the attacker knows all the reports of other secondary users, based on which the attacker sends its report (called dependent attack), an approach for the attacker to perfectly avoid being detected is found, provided that the attacker has perfect information about the miss-detection and false-alarm probabilities. This motivates cognitive radio networks to protect the reports of secondary users. The performance of attacker detection in the general case of multiple attackers is demonstrated using numerical simulations.", "title": "" }, { "docid": "c4fefd19a1a7e93a98312e33bcdc4774", "text": "Data imbalance is a common problem both in single-label classification (SLC) and multi-label classification (MLC). There is no doubt that prediction results suffer from this problem. Although a broad range of studies address the imbalance problem, most of them focus on SLC, and work on MLC is relatively scarce. In fact, this problem arises in MLC more frequently and is more complex than in SLC. In this paper, we deal with the imbalance problem for MLC and propose a new approach called DEML. DEML transforms the whole label set of a multi-label dataset into subsets, and each subset is treated as a multi-class dataset with a balanced class distribution, which not only addresses the imbalance problem but also preserves dataset integrity and consistency. Extensive experiments show that DEML possesses highly competitive performance in both computation and effectiveness.", "title": "" }, { "docid": "06c839f10b3d561c3a327bb67aa8ec10", "text": "A great deal of research exists on the neural basis of theory-of-mind (ToM) or mentalizing. Qualitative reviews on this topic have identified a mentalizing network composed of the medial prefrontal cortex, posterior cingulate/precuneus, and bilateral temporal parietal junction. These conclusions, however, are not based on a quantitative and systematic approach. 
The current review presents a quantitative meta-analysis of neuroimaging studies pertaining to ToM, using the activation-likelihood estimation (ALE) approach. Separate ALE meta-analyses are presented for story-based and nonstory-based studies of ToM. The conjunction of these two meta-analyses reveals a core mentalizing network that includes areas not typically noted by previous reviews. A third ALE meta-analysis was conducted with respect to story comprehension in order to examine the relation between ToM and stories. Story processing overlapped with many regions of the core mentalizing network, and these shared regions bear some resemblance to a network implicated by a number of other processes.", "title": "" }, { "docid": "63663dbc320556f7de09b5060f3815a6", "text": "There has been a long history of applying AI technologies to address software engineering problems especially on tool automation. On the other hand, given the increasing importance and popularity of AI software, recent research efforts have been on exploring software engineering solutions to improve the productivity of developing AI software and the dependability of AI software. The emerging field of intelligent software engineering is to focus on two aspects: (1) instilling intelligence in solutions for software engineering problems; (2) providing software engineering solutions for intelligent software. This extended abstract shares perspectives on these two aspects of intelligent software engineering.", "title": "" }, { "docid": "17752f2b561d81643b35b6d2d10e4e46", "text": "This randomised controlled trial was undertaken to evaluate the effectiveness of acupuncture as a treatment for frozen shoulder. Thirty-five patients with a diagnosis of frozen shoulder were randomly allocated to an exercise group or an exercise plus acupuncture group and treated for a period of 6 weeks. Functional mobility, power, and pain were assessed by a blinded assessor using the Constant Shoulder Assessment, at baseline, 6 weeks and 20 weeks. Analysis was based on the intention-to-treat principle. Compared with the exercise group, the exercise plus acupuncture group experienced significantly greater improvement with treatment. Improvements in scores by 39.8% (standard deviation, 27.1) and 76.4% (55.0) were seen for the exercise and the exercise plus acupuncture groups, respectively at 6 weeks (P=0.048), and were sustained at the 20-week re-assessment (40.3% [26.7] and 77.2% [54.0], respectively; P=0.025). We conclude that the combination of acupuncture with shoulder exercise may offer effective treatment for frozen shoulder.", "title": "" }, { "docid": "ec4638bad4caf17de83ac3557254c4bf", "text": "Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to explain policies for factored MDP by populating a set of domain-independent templates. We also present a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy. Our explanations can be generated automatically at run-time with no additional effort required from the MDP designer. We demonstrate our technique using the problems of advising undergraduate students in their course selection and assisting people with dementia in completing the task of handwashing. 
We also evaluate our explanations for course advising through a user study involving students.", "title": "" }, { "docid": "dd48abf39ab52758719d5be06dc8e733", "text": "A new algorithm for Boolean operations on general planar polygons is presented. It is applicable to general planar polygons (manifold or non-manifold, with or without holes). Edges of the two general polygons are subdivided at the intersection points and touching points. Thus, the boundary of the resultant polygon of the Boolean operation is made of whole edges of the polygons after the subdivision process. We use simplex theory to build the basic mathematical model of the new algorithm. The subordination problem between an edge and a polygon is reduced to the problem of determining whether a point is on some edges of some simplices or inside the simplices, and the associated simplicial chain of the resultant polygon is just an assembly of some simplices and their coefficients from the two polygons after the subdivision process. Examples show that the running time required by the new algorithm is less than one-third of that by the Rivero and Feito algorithm. © 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d44daf0c7f045ef388d8b435a705e0b2", "text": "Mapping the relationship between gene expression and psychopathology is proving to be among the most promising new frontiers for advancing the understanding, treatment, and prevention of mental disorders. Each cell in the human body contains some 23,688 genes, yet only a tiny fraction of a cell's genes are active or “expressed” at any given moment. The interactions of biochemical, psychological, and environmental factors influencing gene expression are complex, yet relatively accessible technologies for assessing gene expression have allowed the identification of specific genes implicated in a range of psychiatric disorders, including depression, anxiety, and schizophrenia. Moreover, successful psychotherapeutic interventions have been shown to shift patterns of gene expression. Five areas of biological change in successful psychotherapy that are dependent upon precise shifts in gene expression are identified in this paper. Psychotherapy ameliorates (a) exaggerated limbic system responses to innocuous stimuli, (b) distortions in learning and memory, (c) imbalances between sympathetic and parasympathetic nervous system activity, (d) elevated levels of cortisol and other stress hormones, and (e) impaired immune functioning. The thesis of this paper is that psychotherapies which utilize non-invasive somatic interventions may yield greater precision and power in bringing about therapeutically beneficial shifts in gene expression that control these biological markers. The paper examines the manual stimulation of acupuncture points during psychological exposure as an example of such a somatic intervention. For each of the five areas, a testable proposition is presented to encourage research that compares acupoint protocols with conventional therapies in catalyzing advantageous shifts in gene expression.", "title": "" }, { "docid": "198d352bf0c044ceccddaeb630b3f9c7", "text": "In this letter, we present an original demonstration of an associative learning neural network inspired by the famous Pavlov's dogs experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. 
We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for the learning and implement short-term association using temporal coding and spike-timing-dependent plasticity–based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamic of the NOMFET in pulse regime.", "title": "" }, { "docid": "846905c567dfddfab0b8c4ee60cc283b", "text": "Social media sentiment analysis (also known as opinion mining) which aims to extract people’s opinions, attitudes and emotions from social networks has become a research hotspot. Conventional sentiment analysis concentrates primarily on the textual content. However, multimedia sentiment analysis has begun to receive attention since visual content such as images and videos is becoming a new medium for self-expression in social networks. In order to provide a reference for the researchers in this active area, we give an overview of this topic and describe the algorithms of sentiment analysis and opinion mining for social multimedia. Having conducted a brief review on textual sentiment analysis for social media, we present a comprehensive survey of visual sentiment analysis on the basis of a thorough investigation of the existing literature. We further give a summary of existing studies on multimodal sentiment analysis which combines multiple media channels. We finally summarize the existing benchmark datasets in this area, and discuss the future research trends and potential directions for multimedia sentiment analysis. This survey covers 100 articles during 2008–2018 and categorizes existing studies according to the approaches they adopt.", "title": "" }, { "docid": "4097fe8240f8399de8c0f7f6bdcbc72f", "text": "Feature extraction of EEG signals is core issues on EEG based brain mapping analysis. The classification of EEG signals has been performed using features extracted from EEG signals. Many features have proved to be unique enough to use in all brain related medical application. EEG signals can be classified using a set of features like Autoregression, Energy Spectrum Density, Energy Entropy, and Linear Complexity. However, different features show different discriminative power for different subjects or different trials. In this research, two-features are used to improve the performance of EEG signals. Neural Network based techniques are applied to feature extraction of EEG signal. This paper discuss on extracting features based on Average method and Max & Min method of the data set. The Extracted Features are classified using Neural Network Temporal Pattern Recognition Technique. The two methods are compared and performance is analyzed based on the results obtained from the Neural Network classifier.", "title": "" }, { "docid": "254a2379c0a0c85024f0b720f4177176", "text": "Microblogging services like Twitter and Facebook collect millions of user generated content every moment about trending news, occurring events, and so on. Nevertheless, it is really a nightmare to find information of interest through the huge amount of available posts that are often noise and redundant. In general, social media analytics services have caught increasing attention from both side research and industry. Specifically, the dynamic context of microblogging requires to manage not only meaning of information but also the evolution of knowledge over the timeline. 
This work defines Time Aware Knowledge Extraction (briefly TAKE) methodology that relies on temporal extension of Fuzzy Formal Concept Analysis. In particular, a microblog summarization algorithm has been defined filtering the concepts organized by TAKE in a time-dependent hierarchy. The algorithm addresses topic-based summarization on Twitter. Besides considering the timing of the concepts, another distinguish feature of the proposed microblog summarization framework is the possibility to have more or less detailed summary, according to the user’s needs, with good levels of quality and completeness as highlighted in the experimental results.", "title": "" }, { "docid": "3760a54a5c5c6675ec2db84035aaef76", "text": "Self-learning hardware systems, with high-degree of plasticity, are critical in performing spatio-temporal tasks in next-generation computing systems. To this end, hierarchical temporal memory (HTM) offers time-based online-learning algorithms that store and recall temporal and spatial patterns. In this work, a reconfigurable and scalable HTM architecture is designed with unique pooling realizations. Virtual synapse design is proposed to address the dynamic interconnections occurring in the learning process. The architecture is interweaved with parallel cells and columns that enable high processing speed for the cortical learning algorithm. HTM has two core operations, spatial and temporal pooling. These operations are verified for two different datasets: MNIST and European number plate font. The spatial pooling operation is independently verified for classification with and without the presence of noise. The temporal pooling is verified for simple prediction. The spatial pooler architecture is ported onto an Altera cyclone II fabric and the entire architecture is synthesized for Xilinx Virtex IV. The results show that ≈ 91% classification accuracy is achieved with MNIST database and ≈ 90% accuracy for the European number plate font numbers with the presence of Gaussian and Salt & Pepper noise. For the prediction, first and second order predictions are observed for a 5-number long sequence generated from European number plate font and ≈ 95% accuracy is obtained. Moreover, the proposed hardware architecture offers 3902X speedup over the software realization. These results indicate that the proposed architecture can serve as a core to build the HTM in hardware and eventually as a standalone self-learning hardware system.", "title": "" }, { "docid": "81ec86a4e13c4a7fb7f0352ac08938ab", "text": "Although experimental studies support that men generally respond more to visual sexual stimuli than do women, there is substantial variability in this effect. One potential source of variability is the type of stimuli used that may not be of equal interest to both men and women whose preferences may be dependent upon the activities and situations depicted. The current study investigated whether men and women had preferences for certain types of stimuli. We measured the subjective evaluations and viewing times of 15 men and 30 women (15 using hormonal contraception) to sexually explicit photos. Heterosexual participants viewed 216 pictures that were controlled for the sexual activity depicted, gaze of the female actor, and the proportion of the image that the genital region occupied. Men and women did not differ in their overall interest in the stimuli, indicated by equal subjective ratings and viewing times, although there were preferences for specific types of pictures. 
Pictures of the opposite sex receiving oral sex were rated as least sexually attractive by all participants and they looked longer at pictures showing the female actor's body. Women rated pictures in which the female actor was looking indirectly at the camera as more attractive, while men did not discriminate by female gaze. Participants did not look as long at close-ups of genitals, and men and women on oral contraceptives rated genital images as less sexually attractive. Together, these data demonstrate sex-specific preferences for specific types of stimuli even when, across stimuli, overall interest was comparable.", "title": "" }, { "docid": "7095bf529a060dd0cd7eeb2910998cf8", "text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable", "title": "" }, { "docid": "92b26cb86ba44eb63e3e9baba2e90acb", "text": "A compound or collision tumor is a rare occurrence in dermatological findings [1]. The coincidence of malignant melanoma (MM) and basal cell carcinoma (BCC) within the same lesion have only been described in few cases in the literature [2–5]. However, until now the pathogenesis of collision tumors existing of MM and BCC remains unclear [2]. To our knowledge it has not been yet established whether there is a concordant genetic background or independent origin as a possible cause for the development of such a compound tumor. We, therefore, present the extremely rare case of a collision tumor of MM and BCC and the results of a genome-wide analysis by single nucleotide polymorphism array (SNP-Array) for detection of identical genomic aberrations.", "title": "" }, { "docid": "537e58ef969cb9b27a35923157b5753a", "text": "We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when is the cut capacity, and more generally total variation, on these graphs a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. 
Taking the limit is enabled by a new metric which allows to suitably compare functionals defined on different point clouds.", "title": "" }, { "docid": "d8259846c9da256fb5f68537517fe55a", "text": "Several versions of the Daum-Huang (DH) filter have been introduced recently to address the task of discrete-time nonlinear filtering. The filters propagate a particle set over time to track the system state, but, in contrast to conventional particle filters, there is no proposal density or importance sampling involved. Particles are smoothly migrated using a particle flow derived from a log-homotopy relating the prior and the posterior. Impressive performance has been demonstrated for a wide range of systems, but the implemented algorithms rely on an extended/unscented Kalman filter (EKF/UKF) that is executed in parallel. We illustrate through simulation that the performance of the exact flow DH filter can be compromised when the UKF and EKF fail. By introducing simple but important modifications to the exact flow DH filter implementation, the performance can be improved dramatically.", "title": "" } ]
scidocsrr
aa9ac409a8037952f2d477f1ccd7ada3
Inertial Odometry on Handheld Smartphones
[ { "docid": "b100ca202f99e3ee086cd61f01349a30", "text": "This paper is concerned with inertial-sensor-based tracking of the gravitation direction in mobile devices such as smartphones. Although this tracking problem is a classical one, choosing a good state-space for this problem is not entirely trivial. Even though for many other orientation related tasks a quaternion-based representation tends to work well, for gravitation tracking their use is not always advisable. In this paper we present a convenient linear quaternion-free state-space model for gravitation tracking. We also discuss the efficient implementation of the Kalman filter and smoother for the model. Furthermore, we propose an adaption mechanism for the Kalman filter which is able to filter out shot-noises similarly as has been proposed in context of adaptive and robust Kalman filtering. We compare the proposed approach to other approaches using measurement data collected with a smartphone.", "title": "" } ]
[ { "docid": "aca08ddd20ac74311b24ae0e74019e46", "text": "This paper presents a system architecture for load management in smart buildings which enables autonomous demand side load management in the smart grid. Being of a layered structure composed of three main modules for admission control, load balancing, and demand response management, this architecture can encapsulate the system functionality, assure the interoperability between various components, allow the integration of different energy sources, and ease maintenance and upgrading. Hence it is capable of handling autonomous energy consumption management for systems with heterogeneous dynamics in multiple time-scales and allows seamless integration of diverse techniques for online operation control, optimal scheduling, and dynamic pricing. The design of a home energy manager based on this architecture is illustrated and the simulation results with Matlab/Simulink confirm the viability and efficiency of the proposed framework.", "title": "" }, { "docid": "38297fe227780c10979988c648dc7574", "text": "Homomorphic signal processing techniques are used to place information imperceivably into audio data streams by the introduction of synthetic resonances in the form of closely spaced echoes These echoes can be used to place digital identi cation tags directly into an audio signal with minimal objectionable degradation of the original signal", "title": "" }, { "docid": "fa34e68369a138cbaaf9ad085803e504", "text": "This paper proposes an optimal rotor design method of an interior permanent magnet synchronous motor (IPMSM) by using a permanent magnet (PM) shape. An IPMSM is a structure in which PMs are buried in an inner rotor. The torque, torque ripple, and safety factor of IPMSM can vary depending on the position of the inserted PMs. To determine the optimal design variables according to the placement of the inserted PMs, parameter analysis was performed. Therefore, a response surface methodology, which is one of the statistical analysis design methods, was used. Among many other response surface methodologies, Box-Behnken design is the most commonly used. For the purpose of this research, Box-Behnken design was used to find the design parameter that can achieve minimum experimental variables of objective function. This paper determines the insert position of the PM to obtain high-torque, low-torque ripple by using a finite-element-method, and this paper obtains an optimal design by using a mechanical stiffness method in which a safety factor is considered.", "title": "" }, { "docid": "2a5194f83142bbaef832011d08acd780", "text": "This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from an inertial measurement unit (IMU) in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground truth motion trajectories across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). The qualitatively and quantitatively evaluations have demonstrated that our simple algorithm outperforms existing heuristic-based approaches and is even comparable to full Visual Inertial navigation to our surprise. 
As far as we know, this paper is the first to introduce supervised training for inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. We will publicly share our code and data to facilitate further research.", "title": "" }, { "docid": "c924aada75b7e3ec231d72f26b936330", "text": "To solve the sparsity problem in collaborative filtering, researchers have introduced transfer learning as a viable approach to make use of auxiliary data. Most previous transfer learning works in collaborative filtering have focused on exploiting point-wise ratings such as numerical ratings, stars, or binary ratings of likes/dislikes. However, in many real-world recommender systems, many users may be unwilling or unlikely to rate items with precision. In contrast, practitioners can turn to various non-preference data to estimate a range or rating distribution of a user's preference on an item. Such a range or rating distribution is called an uncertain rating since it represents a rating spectrum of uncertainty instead of an accurate point-wise score. In this paper, we propose an efficient transfer learning solution for collaborative filtering, known as transfer by integrative factorization (TIF), to leverage such auxiliary uncertain ratings to improve the performance of recommendation. In particular, we integrate auxiliary data of uncertain ratings as additional constraints in the target matrix factorization problem, and learn an expected rating value for each uncertain rating automatically. The advantages of our proposed approach include the efficiency and the improved effectiveness of collaborative filtering, showing that incorporating the auxiliary data of uncertain ratings can really bring a benefit. Experimental results on two movie recommendation tasks show that our TIF algorithm performs significantly better than a state-of-the-art non-transfer learning method.", "title": "" }, { "docid": "7259530c42f4ba91155284ce909d25a6", "text": "We investigate how information leakage reduces the computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log2(1/p) (note that this means that the reduction in quantity and quality is the same, because the quantity of entropy is measured on a logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the quantity of entropy was related to its original quality. The use of metric entropy simplifies the analogous result of Reingold et al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of λ bits, metric entropy gets reduced by a factor 2^λ in quality and λ in quantity. Our formulation allows us to formulate a “chain rule” for leakage on computational entropy. We show that conditioning on λ bits of leakage reduces conditional metric entropy by λ bits. This is the same loss as leaking from unconditional metric entropy. 
This result makes it easy to measure entropy even after several rounds of information leakage.", "title": "" }, { "docid": "45ea8e1e27f6c687d957af561aca5188", "text": "Impedance matching networks for nonlinear devices such as amplifiers and rectifiers are normally very challenging to design, particularly for broadband and multiband devices. A novel design concept for a broadband high-efficiency rectenna without using matching networks is presented in this paper for the first time. An off-center-fed dipole antenna with relatively high input impedance over a wide frequency band is proposed. The antenna impedance can be tuned to the desired value and directly provides a complex conjugate match to the impedance of a rectifier. The received RF power by the antenna can be delivered to the rectifier efficiently without using impedance matching networks; thus, the proposed rectenna has a simple structure, low cost, and compact size. In addition, the rectenna can work well under different operating conditions and using different types of rectifying diodes. A rectenna has been designed and made based on this concept. The measured results show that the rectenna achieves high power conversion efficiency (more than 60%) in two wide bands, which are 0.9–1.1 and 1.8–2.5 GHz, for mobile, Wi-Fi, and ISM bands. Moreover, by using different diodes, the rectenna can maintain its wide bandwidth and high efficiency over a wide range of input power levels (from 0 to 23 dBm) and load values (from 200 to 2000 Ω). It is, therefore, suitable for high-efficiency wireless power transfer or energy harvesting applications. The proposed rectenna is general and simple in structure, without the need for a matching network, and hence is of great significance for many applications.", "title": "" }, { "docid": "16e2ba731973bfdad051b775078e08be", "text": "I examine the phenomenon of implicit learning, the process by which knowledge about the rule-governed complexities of the stimulus environment is acquired independently of conscious attempts to do so. Our research with the two seemingly disparate experimental paradigms of synthetic grammar learning and probability learning is reviewed and integrated with other approaches to the general problem of unconscious cognition. The conclusions reached are as follows: (a) Implicit learning produces a tacit knowledge base that is abstract and representative of the structure of the environment; (b) such knowledge is optimally acquired independently of conscious efforts to learn; and (c) it can be used implicitly to solve problems and make accurate decisions about novel stimulus circumstances. Various epistemological issues and related problems such as intuition, neuroclinical disorders of learning and memory, and the relationship of evolutionary processes to cognitive science are also discussed.", "title": "" }, { "docid": "87ca3f4c11e4853a4b2a153d5b9f1bfe", "text": "The study of light verbs and complex predicates is fraught with dangers and misunderstandings that go beyond the merely terminological. This paper attempts to pick through the terminological, theoretical and empirical jungle in order to arrive at a novel understanding of the role of light verbs crosslinguistically. In particular, this paper addresses how light verbs and complex predicates can be identified crosslinguistically, what the relationship between the two is, and whether light verbs must always be associated with uniform syntactic and semantic properties. 
Finally, the paper proposes a novel view of how light verbs are situated in the lexicon by addressing some historical data and their relationship with preverbs and verb particles. Jespersen (1965, Volume VI:117) is generally credited with first coining the term light verb, which he applied to English V+NP constructions as in (1).", "title": "" }, { "docid": "71d130ff599d1f80432bbf797de978e6", "text": "We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and with physical robots show the efficiency of our approach and show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial.", "title": "" }, { "docid": "53e1c4fc0732efb9b6992a2468425a1a", "text": "This study investigated the effects of gameplaying on fifth-graders' maths performance and attitudes. One hundred twenty-five fifth graders were recruited and assigned to a cooperative Teams-Games-Tournament (TGT), interpersonal competitive or no gameplaying condition. A state standards-based maths exam and an inventory on attitudes towards maths were used for the pretest and posttest. The students' gender, socio-economic status and prior maths ability were examined as the moderating variables and covariate. Multivariate analysis of covariance (MANCOVA) indicated that gameplaying was more effective than drills in promoting maths performance, and cooperative gameplaying was most effective for promoting positive maths attitudes regardless of students' individual differences. Introduction The problem of low achievement in American mathematics education has been discussed in numerous policy reports (Mathematical Sciences Education Board, 2004). Educational researchers (eg Ferrini-Mundy & Schram, 1996) and administrators (eg Brodinsky, 1985), for years, have appealed for mathematics-education reform and proposed various solutions to foster mathematics learning. Amongst these propositions were computer games as powerful mathematical learning tools with great motivational appeal and multiple representations of learning materials (Betz, 1995; Malone, 1981; Moreno, 2002; Quinn, 1994). Researchers reported (eg Ahl, 1981; Bahr & Rieth, 1989; Inkpen, 1994) that a variety of computer games have been used in classrooms to support learning of basic arithmetic and problem-solving skills. Other researchers (Amory, Naicker, Vincent & Adams, 1999; Papert, 1980) contend that computer games need to be carefully aligned with sound learning strategies and conditions to be beneficial. Consistent with this proposition, the incorporation of computer games within a cooperative learning setting becomes an attractive possibility. 
Cooperative learning in mathematics has been well discussed by Davidson (1990): group learning helps to remove students’ frustration; it is not only a source for additional help but also offers a support network. Empirical research (Jacobs, 1996; Reid, 1992; Whicker, Bol & Nunnery, 1997) verifies the importance of cooperative learning in mathematics education. Hence, the potential benefit of combining computer games with cooperative learning in mathematics warrants a field investigation. Specific research on the cooperative use of computer games is limited. Empirical study of this technique is especially sparse. Therefore, the purpose of this research was to explore whether computer games and cooperative learning could be used together to enrich K-12 mathematics education. Employing a pretest–posttest experimental design, the study examined the effects of cooperative gameplaying on fifth-grade students’ maths performance and maths attitudes when compared to the interpersonal competitive gameplaying and control groups.", "title": "" }, { "docid": "7ce646ab9da89dae86071f37961fbeac", "text": "This paper proposes an approach for data modelling in five dimensions. Apart from three dimensions for geometrical representation and a fourth dimension for time, we identify scale as fifth dimensional characteristic. Considering scale as an extra dimension of geographic information, fully integrated with the other dimensions, is new. Through a formal definition of geographic data in a conceptual 5D continuum, the data can be handled by one integrated approach assuring consistency across scale and time dimensions. Because the approach is new and challenging, we choose to step-wise studying several combinations of the five dimensions, ultimately resulting in the optimal 5D model. We also propose to apply mathematical theories on multidimensional modelling to well established principles of multidimensional modelling in the geo-information domain. The result is a conceptual full partition of the 3Dspace+time+scale space (i.e. no overlaps, no gaps) realised in a 5D data model implemented in a Database", "title": "" }, { "docid": "f79eca0cafc35ed92fd8ffd2e7a4ab60", "text": "We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE/NON-DISPUTE label for the discussion as a whole. We evaluate dispute detection approaches on a newly created corpus of Wikipedia Talk page disputes and find that classifiers that rely on our sentiment tagging features outperform those that do not. The best model achieves a very promising F1 score of 0.78 and an accuracy of 0.80.", "title": "" }, { "docid": "e84a03caf97b5a7ee1007c0eab78664d", "text": "We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and higher probabilities to mini-batches with more diverse data. This simultaneously balances the data and leads to stochastic gradients with lower variance. We term this approach Balanced Mini-batch SGD (BM-SGD). We show that regular SGD and stratified sampling emerge as special cases. 
Furthermore, BM-SGD can be considered a generalization of stratified sampling to cases where no discrete features exist to bin the data into groups. We show experimentally that our method results more interpretable and diverse features in unsupervised setups, and in better classification accuracies in supervised setups.", "title": "" }, { "docid": "80f9f3f12e33807e63ee5ba58916d41c", "text": "Positivist and interpretivist researchers have different views on how their research outcomes may be evaluated. The issues of validity, reliability and generalisability, used in evaluating positivist studies, are regarded of relatively little significance by many qualitative researchers for judging the merits of their interpretive investigations. In confirming the research, those three canons need at least to be re-conceptualised in order to reflect the keys issues of concern for interpretivists. Some interpretivists address alternative issues such as credibility, dependability and transferability when determining the trustworthiness of their qualitative investigations. A strategy proposed by several authors for establishing the trustworthiness of the qualitative inquiry is the development of a research audit trail. The audit trail enables readers to trace through a researcher’s logic and determine whether the study’s findings may be relied upon as a platform for further enquiry. While recommended in theory, this strategy is rarely implemented in practice. This paper examines the role of the research audit trail in improving the trustworthiness of qualitative research. Further, it documents the development of an audit trail for an empirical qualitative research study that centred on an interpretive evaluation of a new Information and Communication Technology (ICT) student administrative system in the tertiary education sector in the Republic of Ireland. This research study examined the impact of system introduction across five Institutes of Technology (IoTs) through case study research that incorporated multiple evidence sources. The evidence collected was analysed using a grounded theory method, which was supported by qualitative data analysis software. The key concepts and categories that emerged from this process were synthesized into a cross case primary narrative; through reflection the primary narrative was reduced to a higher order narrative that presented the principle findings or key research themes. From this higher order narrative a theoretical conjecture was distilled. Both a physical and intellectual audit trail for this study are presented in this paper. The physical audit trail documents all keys stages of a research study and reflects the key research methodology decisions. The intellectual audit trail, on the other hand, outlines how a researcher’s thinking evolved throughout all phases of the study. Hence, these audit trails make transparent the key decisions taken throughout the research process. The paper concludes by discussing the value of this audit trail process in confirming a qualitative study’s findings.", "title": "" }, { "docid": "b62a3684114bc6d7c9e5c27bb384d0b2", "text": "Long on-chip wires pose well-known latency, bandwidth, and energy challenges to the designers of high-performance VLSI systems. Repeaters effectively mitigate wire RC effects but do little to improve their energy costs. 
Moreover, proliferating repeater farms add significant complexity to full-chip integration, motivating circuits to improve wire performance and energy while reducing the number of repeaters. Such methods include capacitive-mode signaling, which combines a capacitive driver with a capacitive load [1,2]; and current-mode signaling, which pairs a resistive driver with a resistive load [3,4]. While both can significantly improve wire performance, capacitive drivers offer added benefits of reduced voltage swing on the wire and intrinsic driver pre-emphasis. As wires scale, slow slew rates on highly resistive interconnects will still limit wire performance due to inter-symbol interference (ISI) [5]. Further improvements can come from equalization circuits on receivers [2] and transmitters [4] that trade off power for bandwidth. In this paper, we extend these ideas to a capacitively driven pulse-mode wire using a transmit-side adaptive FIR filter and a clockless receiver, and show bandwidth densities of 2.2–4.4 Gb/s/µm over 90nm 5mm links, with corresponding energies of 0.24–0.34 pJ/bit on random data.", "title": "" }, { "docid": "d61094fb93deadb6c5fa2856fca267db", "text": "We present a new design for a 1-b full adder featuring hybrid-CMOS design style. The quest to achieve a good-drivability, noise-robustness, and low-energy operations for deep submicrometer guided our research to explore hybrid-CMOS style design. Hybrid-CMOS design style utilizes various CMOS logic style circuits to build new full adders with desired performance. This provides the designer a higher degree of design freedom to target a wide range of applications, thus significantly reducing design efforts. We also classify hybrid-CMOS full adders into three broad categories based upon their structure. Using this categorization, many full-adder designs can be conceived. We will present a new full-adder design belonging to one of the proposed categories. The new full adder is based on a novel xor-xnor circuit that generates xor and xnor full-swing outputs simultaneously. This circuit outperforms its counterparts showing 5%-37% improvement in the power-delay product (PDP). A novel hybrid-CMOS output stage that exploits the simultaneous xor-xnor signals is also proposed. This output stage provides good driving capability enabling cascading of adders without the need of buffer insertion between cascaded stages. There is approximately a 40% reduction in PDP when compared to its best counterpart. During our experimentations, we found out that many of the previously reported adders suffered from the problems of low swing and high noise when operated at low supply voltages. The proposed full adder is energy efficient and outperforms several standard full adders without trading off driving capability and reliability. The new full-adder circuit successfully operates at low voltages with excellent signal integrity and driving capability. To evaluate the performance of the new full adder in a real circuit, we embedded it in a 4- and 8-b, 4-operand carry-save array adder with final carry-propagate adder. The new adder displayed better performance as compared to the standard full adders", "title": "" }, { "docid": "e3459bb93bb6f7af75a182472bb42b3e", "text": "We consider the algorithmic problem of selecting a set of target nodes that cause the biggest activation cascade in a network. 
In case when the activation process obeys the diminishing return property, a simple hill-climbing selection mechanism has been shown to achieve a provably good performance. Here we study models of influence propagation that exhibit critical behavior and where the property of diminishing returns does not hold. We demonstrate that in such systems the structural properties of networks can play a significant role. We focus on networks with two loosely coupled communities and show that the double-critical behavior of activation spreading in such systems has significant implications for the targeting strategies. In particular, we show that simple strategies that work well for homogenous networks can be overly suboptimal and suggest simple modification for improving the performance by taking into account the community structure.", "title": "" }, { "docid": "88e582927c4e4018cb4071eeeb6feff4", "text": "While previous studies have correlated the Dark Triad traits (i.e., narcissism, psychopathy, and Machiavellianism) with a preference for short-term relationships, little research has addressed possible correlations with short-term relationship sub-types. In this online study using Amazon’s Mechanical Turk system (N = 210) we investigated the manner in which scores on the Dark Triad relate to the selection of different mating environments using a budget-allocation task. Overall, the Dark Triad were positively correlated with preferences for short-term relationships and negatively correlated with preferences for a long-term relationship. Specifically, narcissism was uniquely correlated with preferences for one-night stands and friends-with-benefits and psychopathy was uniquely correlated with preferences for bootycall relationships. Both narcissism and psychopathy were negatively correlated with preferences for serious romantic relationships. In mediation analyses, psychopathy partially mediated the sex difference in preferences for booty-call relationships and narcissism partially mediated the sex difference in preferences for one-night stands. In addition, the sex difference in preference for serious romantic relationships was partially mediated by both narcissism and psychopathy. It appears the Dark Triad traits facilitate the adoption of specific mating environments providing fit with people’s personality traits. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cd0e7cace1b89af72680f9d8ef38bdf3", "text": "Analyzing stock market trends and sentiment is an interdisciplinary area of research being undertaken by many disciplines such as Finance, Computer Science, Statistics, and Economics. It has been well established that real time news plays a strong role in the movement of stock prices. With the advent of electronic and online news sources, analysts have to deal with enormous amounts of real-time, unstructured streaming data. In this paper, we present an automated text mining based approach to aggregate news stories from diverse sources and create a News Corpus. The Corpus is filtered down to relevant sentences and analyzed using Natural Language Processing (NLP) techniques. A sentiment metric, called NewsSentiment, utilizing the count of positive and negative polarity words is proposed as a measure of the sentiment of the overall news corpus. We have used various open source packages and tools to develop the news collection and aggregation engine as well as the sentiment evaluation engine. Extensive experimentation has been done using news stories about various stocks. 
The time variation of NewsSentiment shows a very strong correlation with the actual stock price movement. Our proposed metric has many applications in analyzing current news stories and predicting stock trends for specific companies and sectors of the economy.", "title": "" } ]
scidocsrr
4f47fe533881b91eb0338a29e3fd64d0
Query expansion via WordNet for effective code search
[ { "docid": "bd95c9cf4ca8b57cfae4da13c5750244", "text": "Feature location is the activity of identifying an initial location in the source code that implements functionality in a software system. Many feature location techniques have been introduced that automate some or all of this process, and a comprehensive overview of this large body of work would be beneficial to researchers and practitioners. This paper presents a systematic literature survey of feature location techniques. Eighty-nine articles from 25 venues have been reviewed and classified within the taxonomy in order to organize and structure existing work in the field of feature location. The paper also discusses open issues and defines future directions in the field of feature location.", "title": "" } ]
[ { "docid": "07d8ec2d95e09e6efb822430d7001b62", "text": "This paper presents a multilevel inverter that has been conceptualized to reduce component count, particularly for a large number of output levels. It comprises floating input dc sources alternately connected in opposite polarities with one another through power switches. Each input dc level appears in the stepped load voltage either individually or in additive combinations with other input levels. This approach results in reduced number of power switches as compared to classical topologies. The working principle of the proposed topology is demonstrated with the help of a single-phase five-level inverter. The topology is investigated through simulations and validated experimentally on a laboratory prototype. An exhaustive comparison of the proposed topology is made against the classical cascaded H-bridge topology.", "title": "" }, { "docid": "e56bc26cd567aff51de3cb47f9682149", "text": "Recent technological advances have expanded the breadth of available omic data, from whole-genome sequencing data, to extensive transcriptomic, methylomic and metabolomic data. A key goal of analyses of these data is the identification of effective models that predict phenotypic traits and outcomes, elucidating important biomarkers and generating important insights into the genetic underpinnings of the heritability of complex traits. There is still a need for powerful and advanced analysis strategies to fully harness the utility of these comprehensive high-throughput data, identifying true associations and reducing the number of false associations. In this Review, we explore the emerging approaches for data integration — including meta-dimensional and multi-staged analyses — which aim to deepen our understanding of the role of genetics and genomics in complex outcomes. With the use and further development of these approaches, an improved understanding of the relationship between genomic variation and human phenotypes may be revealed.", "title": "" }, { "docid": "26aecc52cd3e4eaec05011333a9a7814", "text": "This paper introduces the concept of letting an RDBMS Optimizer optimize its own environment. In our project, we have used the DB2 Optimizer to tackle the index selection problem, a variation of the knapack problem. This paper will discuss our implementation of index recommendation, the user interface, and provide measurements on the quality of the recommended indexes.", "title": "" }, { "docid": "d859fb8570c91206708b7b2b8f5eedcb", "text": "In this article, we describe a method for overlaying arbitrary texture image onto surface of T-shirt worn by a user. In this method, the texture image is previously divided into a number of patches. On the T-shirt, markers are printed at the positions corresponding to the vertices of the patches. The markers on the surface of the T-shirt are tracked in the motion image taken by a camera. The texture image is warped according to the tracked positions of the markers, which is overlaid onto the captured image. This article presents experimental results with the pilot system of virtual clothing implemented based on the proposed method.", "title": "" }, { "docid": "dbae7d7a7560d2ea4a8b6e0772cc87c0", "text": "BACKGROUND\nPlacebo and nocebo effects occur in clinical or laboratory medical contexts after administration of an inert treatment or as part of active treatments and are due to psychobiological mechanisms such as expectancies of the patient. 
Placebo and nocebo studies have evolved from predominantly methodological research into a far-reaching interdisciplinary field that is unravelling the neurobiological, behavioural and clinical underpinnings of these phenomena in a broad variety of medical conditions. As a consequence, there is an increasing demand from health professionals to develop expert recommendations about evidence-based and ethical use of placebo and nocebo effects for clinical practice.\n\n\nMETHODS\nA survey and interdisciplinary expert meeting by invitation was organized as part of the 1st Society for Interdisciplinary Placebo Studies (SIPS) conference in 2017. Twenty-nine internationally recognized placebo researchers participated.\n\n\nRESULTS\nThere was consensus that maximizing placebo effects and minimizing nocebo effects should lead to better treatment outcomes with fewer side effects. Experts particularly agreed on the importance of informing patients about placebo and nocebo effects and training health professionals in patient-clinician communication to maximize placebo and minimize nocebo effects.\n\n\nCONCLUSIONS\nThe current paper forms a first step towards developing evidence-based and ethical recommendations about the implications of placebo and nocebo research for medical practice, based on the current state of evidence and the consensus of experts. Future research might focus on how to implement these recommendations, including how to optimize conditions for educating patients about placebo and nocebo effects and providing training for the implementation in clinical practice.", "title": "" }, { "docid": "4db9cf56991edae0f5ca34546a8052c4", "text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. 
Due to this indifference, it appears as if the basic principles that underlie interpolation might sometimes be cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: model-based recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministically recovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypotheses for interpolation are:", "title": "" }, { "docid": "9fc3cd6152d46e9a550e6ab332b1f636", "text": "Focusing on the freight-train dominant electrical railway power system (ERPS) mixed with ac-dc and ac-dc-ac locomotives (its power factor ∈ [0.70, 0.84]), this paper proposes a power factor-oriented railway power flow controller (RPFC) for the power quality improvement of ERPS. The comprehensive relationship of the primary power factor, converter capacity, and the two-phase load currents is built in this paper. Besides, as the main contribution of this paper, the optimal compensating strategy suited to the randomly fluctuating two-phase loads is analyzed and designed based on a real traction substation, for the purposes of satisfying the power quality standard, enhancing RPFC's control flexibility, and decreasing the converter's capacity. Finally, both simulation and experiment are used to validate the proposed concept.", "title": "" }, { "docid": "b3a85b88e4a557fcb7f0efb6ba628418", "text": "We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth superresolution, colorization, and semantic segmentation) while being 10-1000× faster than baseline techniques with comparable accuracy, and producing lower-error output than techniques with comparable runtimes. The bilateral solver is fast, robust, straightforward to generalize to new domains, and simple to integrate into deep learning pipelines.", "title": "" }, { "docid": "d00df5e0c5990c05d5a67e311586a68a", "text": "The present research explored the controversial link between global self-esteem and externalizing problems such as aggression, antisocial behavior, and delinquency. In three studies, we found a robust relation between low self-esteem and externalizing problems. This relation held for measures of self-esteem and externalizing problems based on self-report, teachers' ratings, and parents' ratings, and for participants from different nationalities (United States and New Zealand) and age groups (adolescents and college students).
Moreover, this relation held both cross-sectionally and longitudinally and after controlling for potential confounding variables such as supportive parenting, parent-child and peer relationships, achievement-test scores, socioeconomic status, and IQ. In addition, the effect of self-esteem on aggression was independent of narcissism, an important finding given recent claims that individuals who are narcissistic, not low in self-esteem, are aggressive. Discussion focuses on clarifying the relations among self-esteem, narcissism, and externalizing problems.", "title": "" }, { "docid": "4f4817fd70f62b15c0b52311fa677a64", "text": "Active plasmonics is a burgeoning and challenging subfield of plasmonics. It exploits the active control of surface plasmon resonance. In this review, a first-ever in-depth description of the theoretical relationship between surface plasmon resonance and its affecting factors, which forms the basis for active plasmon control, will be presented. Three categories of active plasmonic structures, consisting of plasmonic structures in tunable dielectric surroundings, plasmonic structures with tunable gap distances, and self-tunable plasmonic structures, will be proposed in terms of the modulation mechanism. The recent advances and current challenges for these three categories of active plasmonic structures will be discussed in detail. The flourishing development of active plasmonic structures opens access to new application fields. A significant part of this review will be devoted to the applications of active plasmonic structures in plasmonic sensing, tunable surface-enhanced Raman scattering, active plasmonic components, and electrochromic smart windows. This review will be concluded with a section on the future challenges and prospects for active plasmonics.", "title": "" }, { "docid": "b8a9b4ed7319f11198791a178cb17d7f", "text": "Semantic relation classification remains a challenge in natural language processing. In this paper, we introduce a hierarchical recurrent neural network that is capable of extracting information from raw sentences for relation classification. Our model has several distinctive features: (1) Each sentence is divided into three context subsequences according to two annotated nominals, which allows the model to encode each context subsequence independently so as to selectively focus as on the important context information; (2) The hierarchical model consists of two recurrent neural networks (RNNs): the first one learns context representations of the three context subsequences respectively, and the second one computes semantic composition of these three representations and produces a sentence representation for the relationship classification of the two nominals. (3) The attention mechanism is adopted in both RNNs to encourage the model to concentrate on the important information when learning the sentence representations. Experimental results on the SemEval-2010 Task 8 dataset demonstrate that our model is comparable to the state-of-the-art without using any hand-crafted features.", "title": "" }, { "docid": "cf2a39b684233cff193238114816efde", "text": "In this paper, we propose an Extensible Markup Language (XML)-based multiagent recommender system for supporting online recruitment services. 
Our system is characterized by the following features: 1) it handles user profiles for personalizing the job search over the Internet; 2) it is based on the intelligent agent technology; and 3) it uses XML for guaranteeing a light, versatile, and standard mechanism for information representation, storing, and exchange. This paper discusses the basic features of the proposed system, presents the results of an experimental study we have carried out for evaluating its performance, and makes a comparison between the proposed system and other e-recruitment systems already presented in the past.", "title": "" }, { "docid": "89c73ad6eb23c016f6e67e4f62dd22a3", "text": "A new approach for improving the power efficiency of the conventional four-phase charge pump is presented. Based on the multi-step capacitor charging and the charge sharing concept, the charge pump design is able to reduce the overall power consumption by 35% compared to the conventional four-phase charge pump and by 15% compared to a charge sharing charge pump, for an output current of 200μA with 12V output voltage.", "title": "" }, { "docid": "0dbb5f492e6e2336abea6bf8ce6ee3cc", "text": "This paper presents a Lie group setting for the problem of control of formations, as a natural outcome of the analysis of a planar two-vehicle formation control law. The vehicle trajectories are described using planar Frenet-Serret equations of motion, which capture the evolution of both the vehicle position and orientation for unit-speed motion subject to curvature (steering) control. The set of all possible (relative) equilibria for arbitrary G-invariant curvature controls is described (where G = SE(2) is a symmetry group for the control law). A generalization of the control law for n vehicles is presented, and the corresponding (relative) equilibria are characterized. Work is on-going to discover stability and convergence results for the n-vehicle problem. The practical motivation for this work is the problem of formation control for meter-scale UAVs; therefore, an implementation approach consistent with UAV payload constraints is also discussed.", "title": "" }, { "docid": "0cd1400bce31ea35b3f142339737dc28", "text": "LLC resonant converter is a nonlinear system, limiting the use of typical linear control methods. This paper proposed a new nonlinear control strategy, using load feedback linearization for an LLC resonant converter. Compared with the conventional PI controllers, the proposed feedback linearized control strategy can achieve better performance with elimination of the nonlinear characteristics. The LLC resonant converter's dynamic model is built based on fundamental harmonic approximation using extended describing function. By assuming the dynamics of resonant network is much faster than the output voltage and controller, the LLC resonant converter's model is simplified from seven-order state equations to two-order ones. Then, the feedback linearized control strategy is presented. A double loop PI controller is designed to regulate the modulation voltage. The switching frequency can be calculated as a function of the load, input voltage, and modulation voltage. Finally, a 200 W laboratory prototype is built to verify the proposed control scheme. The settling time of the LLC resonant converter is reduced from 38.8 to 20.4 ms under the positive load step using the proposed controller. 
Experimental results prove the superiority of the proposed feedback linearized controller over the conventional PI controller.", "title": "" }, { "docid": "7d7c596d334153f11098d9562753a1ee", "text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.", "title": "" }, { "docid": "73f8a5e5e162cc9b1ed45e13a06e78a5", "text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. In the U.S., the project, Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.", "title": "" }, { "docid": "96b0acb9a28c8823e66e1384e8ec5f6f", "text": "This paper presents a visual inspection system aimed at the automatic detection and classification of bare-PCB manufacturing errors. The interest of this CAE system lies in a twofold approach. On the one hand, we propose a modification of the subtraction method based on reference images that allows higher performance in the process of defect detection. On the other hand, this method is combined with a particle classification algorithm based on two measures of light intensity. 
As a result of this strategy, a machine vision application has been implemented to assist people in etching, inspection and verification tasks of PCBs.", "title": "" }, { "docid": "0ff3e49a700a776c1a8f748d78bc4b73", "text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.", "title": "" }, { "docid": "5b5345a894d726186ba7f6baf76cb65e", "text": "In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate approach would be to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.", "title": "" } ]
scidocsrr
9735b3ffd294a64cda6eda11204e0867
$1.00 per RT #BostonMarathon #PrayForBoston: Analyzing fake content on Twitter
[ { "docid": "56c5ec77f7b39692d8b0d5da0e14f82a", "text": "Using tweets extracted from Twitter during the Australian 2010-2011 floods, social network analysis techniques were used to generate and analyse the online networks that emerged at that time. The aim was to develop an understanding of the online communities for the Queensland, New South Wales and Victorian floods in order to identify active players and their effectiveness in disseminating critical information. A secondary goal was to identify important online resources disseminated by these communities. Important and effective players during the Queensland floods were found to be: local authorities (mainly the Queensland Police Services), political personalities (Queensland Premier, Prime Minister, Opposition Leader, Member of Parliament), social media volunteers, traditional media reporters, and people from not-for-profit, humanitarian, and community associations. A range of important resources were identified during the Queensland flood; however, they appeared to be of a more general information nature rather than vital information and updates on the disaster. Unlike Queensland, there was no evidence of Twitter activity from the part of local authorities and the government in the New South Wales and Victorian floods. Furthermore, the level of Twitter activity during the NSW floods was almost nil. Most of the active players during the NSW and Victorian floods were volunteers who were active during the Queensland floods. Given the positive results obtained by the active involvement of the local authorities and government officials in Queensland, and the increasing adoption of Twitter in other parts of the world for emergency situations, it seems reasonable to push for greater adoption of Twitter from local and federal authorities Australia-wide during periods of mass emergencies.", "title": "" }, { "docid": "fae9db6e3522ec00793613abc3617dcc", "text": "Size, accessibility, and rate of growth of Online Social Media (OSM) has attracted cyber crimes through them. One form of cyber crime that has been increasing steadily is phishing, where the goal (for the phishers) is to steal personal information from users which can be used for fraudulent purposes. Although the research community and industry has been developing techniques to identify phishing attacks through emails and instant messaging (IM), there is very little research done, that provides a deeper understanding of phishing in online social media. Due to constraints of limited text space in social systems like Twitter, phishers have begun to use URL shortener services. In this study, we provide an overview of phishing attacks for this new scenario. One of our main conclusions is that phishers are using URL shorteners not only for reducing space but also to hide their identity. We observe that social media websites like Facebook, Habbo, Orkut are competing with e-commerce services like PayPal, eBay in terms of traffic and focus of phishers. Orkut, Habbo, and Facebook are amongst the top 5 brands targeted by phishers. We study the referrals from Twitter to understand the evolving phishing strategy. A staggering 89% of references from Twitter (users) are inorganic accounts which are sparsely connected amongst themselves, but have large number of followers and followees. We observe that most of the phishing tweets spread by extensive use of attractive words and multiple hashtags. 
To the best of our knowledge, this is the first study to connect the phishing landscape using blacklisted phishing URLs from PhishTank, URL statistics from bit.ly and cues from Twitter to track the impact of phishing in online social media.", "title": "" }, { "docid": "81387b0f93b68e8bd6a56a4fd81477e9", "text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.", "title": "" } ]
[ { "docid": "b5c7b9f1f57d3d79d3fc8a97eef16331", "text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.", "title": "" }, { "docid": "b4a50e35ce775166e50c6bcbad650a4b", "text": "Sentiment analysis is a language processing task which is used to find out opinion expressed in online reviews to categorize it into different classes like positive, negative or neutral. The paper aims to summarize the movie reviews at aspect level so that user can easily find out which aspects of movie are liked and disliked by user. Before finding aspect and its respective opinion of movie, proposed system performs subjectivity analysis. Subjectivity analysis is one of the important and useful tasks in sentiment analysis. Online reviews may consist of both objective and subjective sentences. Among these, objective sentences consist of only factual information and no sentiments or opinion. Hence subjective sentences are considered for further processing i.e. to find feature-opinion pair and to find summery at aspect level. In this paper, two different methods are implemented for finding subjectivity of sentences and then rule based system is used to find feature-opinion pair and finally the orientation of extracted opinion is revealed using two different method. Initially the proposed system uses SentiWordNet approach to find out orientation of extracted opinion and then it uses the method which is based on lexicon consisting list of positive and negative words.", "title": "" }, { "docid": "19c6f2b03624f41acc5fb060bff04c64", "text": "Estimation of binocular disparity in vision systems is typically based on a matching pipeline and rectification. Estimation of disparity in the brain, in contrast, is widely assumed to be based on the comparison of local phase information from binocular receptive fields. The classic binocular energy model shows that this requires the presence of local quadrature pairs within the eye which show phaseor position-shifts across the eyes. While numerous theoretical accounts of stereopsis have been based on these observations, there has been little work on how energy models and depth inference may emerge through learning from the statistics of image pairs. Here, we describe a probabilistic, deep learning approach to modeling disparity and a methodology for generating binocular training data to estimate model parameters. We show that within-eye quadrature filters occur as a result of fitting the model to data, and we demonstrate how a three-layer network can learn to infer depth entirely from training data. We also show how training energy models can provide depth cues that are useful for recognition. 
We also show that pooling over more than two filters leads to richer dependencies between the learned filters.", "title": "" }, { "docid": "7076f898c65a0e93a94357b757f92fc8", "text": "Understanding how the brain's functioning mediates mental experience, and how to control the brain's processing to alter cognition or disease, are central projects of cognitive and neural science. The advent of real-time functional magnetic resonance imaging (rtfMRI) now makes it possible to observe the biology of one's own brain while thinking, feeling and acting. Recent evidence suggests that people can learn to control brain activation in localized regions, with corresponding changes in their mental operations, by observing information from their brain while inside an MRI scanner. For example, subjects can learn to deliberately control activation in brain regions involved in pain processing with corresponding changes in experienced pain. This may provide a novel, non-invasive means of observing and controlling brain function, potentially altering cognitive processes or disease.", "title": "" }, { "docid": "95155fb36a6be4483d007be8882d4332", "text": "The last decade has seen the rise of immense online social networks (OSNs) such as MySpace and Facebook. In this paper we use epidemiological models to explain user adoption and abandonment of OSNs, where adoption is analogous to infection and abandonment is analogous to recovery. We modify the traditional SIR model of disease spread by incorporating infectious recovery dynamics such that contact between a recovered and infected member of the population is required for recovery. The proposed infectious recovery SIR model (irSIR model) is validated using publicly available Google search query data for “MySpace” as a case study of an OSN that has exhibited both adoption and abandonment phases. The irSIR model is then applied to search query data for “Facebook,” which is just beginning to show the onset of an abandonment phase. Extrapolating the best fit model into the future predicts a rapid decline in Facebook activity in the next few years.", "title": "" }, { "docid": "539a25209bf65c8b26cebccf3e083cd0", "text": "We study the problem of web search result diversification in the case where intent-based relevance scores are available. A diversified search result will hopefully satisfy the information need of users who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, to measure relevance and diversity altogether. We argue that this is a better metric than some previously proposed intent-aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results based on optimizing an objective function corresponding to this metric and evaluate it on shopping-related queries.", "title": "" }, { "docid": "494ed6efac81a9e8bbdbfa9f19a518d3", "text": "We studied the possibilities of embroidered antenna-IC interconnections and contour antennas in passive ultrahigh-frequency radio-frequency identification textile tags. The tag antennas were patterned from metal-coated fabrics and embroidered with conductive yarn. The wireless performance of the tags with embroidered antenna-IC interconnections was evaluated through measurements, and the results were compared to identical tags, where the ICs were attached using regular conductive epoxy. Our results show that the textile tags with embroidered antenna-IC interconnections attained similar performance.
In addition, the tags where only the borderlines of the antennas were embroidered showed excellent wireless performance.", "title": "" }, { "docid": "616d20b1359cc1cf4fcfb1a0318d721e", "text": "The Burj Khalifa Project is the tallest structure ever built by man; the tower is 828 meters tall and comprises 162 floors above grade and 3 basement levels. Early integration of aerodynamic shaping and wind engineering played a major role in the architectural massing and design of this multi-use tower, where mitigating and taming the dynamic wind effects was one of the most important design criteria set forth at the onset of the project design. This paper provides a brief description of the tower structural systems, focuses on the key issues considered in construction planning of the key structural components, and briefly outlines the execution of one of the most comprehensive structural health monitoring programs in tall buildings.", "title": "" }, { "docid": "feed47e790a1034e3909359408b4bf00", "text": "Every apple destined for the fresh market is picked by the human hand. Despite extensive research over the past four decades, there are no mechanical apple harvesters for the fresh market commercially available, which is a significant concern because of increasing uncertainty about the availability of manual labor and rising production costs. The highly unstructured orchard environment has been a major challenge to the development of commercially viable robotic harvesting systems. This paper reports the design and field evaluation of a robotic apple harvester. The approach adopted was to use a low-cost system to assess required sensing, planning, and manipulation functionality in a modern orchard system with a planar canopy. The system was tested in a commercial apple orchard in Washington State. Workspace modifications and performance criteria are thoroughly defined and reported to help evaluate the approach and guide future enhancements. The machine vision system was accurate and had an average localization time of 1.5 s per fruit. The seven degree of freedom harvesting system successfully picked 127 of the 150 fruit attempted for an overall success rate of 84% with an average picking time of 6.0 s per fruit. Future work will include integration of additional sensing and obstacle detection for improved system robustness.", "title": "" }, { "docid": "0b8ec67f285c4186866f42305dfb7cf2", "text": "Some deep convolutional neural networks have been proposed for time-series classification and class-imbalanced problems. However, those models performed poorly and even failed to recognize the minority class of an imbalanced temporal sequence dataset.
Minority samples cause problems for temporal deep learning classifiers due to the equal treatment of the majority and minority classes. Until recently, there were few works applying deep learning to imbalanced time-series classification (ITSC) tasks. Here, this paper aimed at tackling ITSC problems with deep learning. An adaptive cost-sensitive learning strategy was proposed to modify temporal deep learning models. Through the proposed strategy, classifiers could automatically assign misclassification penalties to each class. In the experimental section, the proposed method was utilized to modify five neural networks. They were evaluated on a large-volume, real-life, imbalanced time-series dataset with six metrics. Each single network was also tested alone and combined with several mainstream data samplers. Experimental results illustrated that the proposed cost-sensitive modified networks worked well on ITSC tasks. Compared to other methods, the cost-sensitive convolutional neural network and residual network won out in terms of all metrics. Consequently, the proposed cost-sensitive learning strategy can be used to modify deep learning classifiers from cost-insensitive to cost-sensitive. Those cost-sensitive convolutional networks can be effectively applied to address ITSC issues.", "title": "" }, { "docid": "c3cb261d9dc6b92a6e69e4be7ec44978", "text": "An increasing number of studies in political communication focus on the “sentiment” or “tone” of news content, political speeches, or advertisements. This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results based on the other available dictionaries. The LSD is thus a useful starting point for a revived discussion about dictionary construction and validation in sentiment analysis for political communication.", "title": "" }, { "docid": "764840c288985e0257413c94205d2bf2", "text": "Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes.
Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.", "title": "" }, { "docid": "f3a08d4f896f7aa2d0f1fff04764efc3", "text": "The natural distribution of textual data used in text classification is often imbalanced. Categories with fewer examples are under-represented and their classifiers often perform far below satisfactory. We tackle this problem using a simple probability-based term weighting scheme to better distinguish documents in minor categories. This new scheme directly utilizes two critical information ratios, i.e. relevance indicators. Such relevance indicators are nicely supported by probability estimates which embody the category membership. Our experimental study using both Support Vector Machines and Naïve Bayes classifiers and extensive comparison with other classic weighting schemes over two benchmarking data sets, including Reuters-21578, shows significant improvement for minor categories, while the performance for major categories is not jeopardized. Our approach has suggested a simple and effective solution to boost the performance of text classification over skewed data sets.", "title": "" }, { "docid": "4d16c9c38837adc8f3b36031871f1048", "text": "We present a frequency modulated continuous wave (FMCW) multiple input multiple output (MIMO) radar demonstrator system operating in the W-band at frequencies around 100 GHz. It consists of a two-dimensional sparse array together with hardware for signal generation and image reconstruction that we will describe in more detail. The geometry of the sparse array was designed with the help of simulations with the aim of imaging at distances of just a few up to more than 150 meters. The FMCW principle is used to extract range information. To obtain information in both cross-range directions a back-propagation algorithm is used and further explained in this paper. Finally, we will present first measurements and explain the calibration process.", "title": "" }, { "docid": "cb985a5ede945041a9f418ef4b3f23e1", "text": "Prediction of gene regulatory networks (GRN) from expression data is a challenging task. There are many methods that have been developed to address this challenge, ranging from supervised to unsupervised methods. The most promising methods are based on support vector machines (SVM). There is a need for a comprehensive analysis of the prediction accuracy of the supervised SVM method using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for inference of GRN. Using CompareSVM, we investigated and evaluated different SVM kernel methods on simulated microarray datasets of different sizes in detail. The results obtained from CompareSVM showed that the accuracy of the inference method depends upon the nature of the experimental condition and the size of the network. For networks with few nodes (<200), and on average over all network sizes, the SVM Gaussian kernel outperforms all the other inference methods on knockout, knockdown, and multifactorial datasets.
For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/.", "title": "" }, { "docid": "2b314587816255285bf985a086719572", "text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titratable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from a greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of the Sunstream variety at the red stage. The highest total acidity, expressed as g of citric acid per 100 g, was observed at the pink stage (variety Sakura) or the breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with the other varieties. This shows that ripening stage has a significant effect on tomato biochemical composition along with variety.", "title": "" }, { "docid": "51c14998480e2b1063b727bf3e4f4ad0", "text": "With the rapid growth of multimedia information, the font library has become a part of people's work life. Compared to Western alphabetic languages, it is difficult to create new Chinese fonts due to the huge number of characters and their complex shapes. At present, most research on the automatic generation of fonts uses traditional methods requiring a large number of rules and parameters set by experts, which are not widely adopted. This paper divides Chinese characters into strokes and generates new font strokes by fusing the styles of two existing font strokes and assembling them into new fonts. This approach can effectively improve the efficiency of font generation, reduce the costs of designers, and is able to inherit the style of existing fonts. In the process of learning to generate new fonts, Generative Adversarial Nets, popular in deep learning, have been used. Compared with the traditional method, it can generate higher quality fonts without a well-designed and complex loss function.", "title": "" }, { "docid": "404eca4e0a9697aea608184589f8ebb4", "text": "Cloud storage services allow users to outsource their data to cloud servers to save local data storage costs. However, unlike using local storage devices, users do not physically manage the data stored on cloud servers; therefore, the data integrity of the outsourced data has become an issue. Many public verification schemes have been proposed to enable a third-party auditor to verify the data integrity for users. These schemes make an impractical assumption—the auditors have enough computation capability to bear expensive verification costs. In this paper, we propose a novel public verification scheme for cloud storage using indistinguishability obfuscation, which requires only lightweight computation on the auditor and delegates most computation to the cloud.
We further extend our scheme to support batch verification and data dynamic operations, where multiple verification tasks from different users can be performed efficiently by the auditor and the cloud-stored data can be updated dynamically. Compared with other existing works, our scheme significantly reduces the auditor’s computation overhead. Moreover, the batch verification overhead on the auditor side in our scheme is independent of the number of verification tasks. Our scheme could be practical in a scenario, where the data integrity verifications are executed frequently, and the number of verification tasks (i.e., the number of users) is numerous; even if the auditor is equipped with a low-power device, it can verify the data integrity efficiently. We prove the security of our scheme under the strongest security model proposed by Shi et al. (ACM CCS 2013). Finally, we conduct a performance analysis to demonstrate that our scheme is more efficient than other existing works in terms of the auditor’s communication and computation efficiency.", "title": "" }, { "docid": "95037e7dc3ae042d64a4b343ad4efd39", "text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.", "title": "" } ]
scidocsrr
b89a8b2b2ba7e81a1852b9eaba5b5292
Neural Networks Incorporating Dictionaries for Chinese Word Segmentation
[ { "docid": "8aefd572e089cb29c13cefc6e59bdda8", "text": "Different linguistic perspectives causes many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improve the performance for each single criterion. However, it is interesting to exploit these different criteria and mining their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that the performance of each corpus obtains a significant improvement, compared to single-criterion learning. Source codes of this paper are available on Github1.", "title": "" } ]
[ { "docid": "08a51c92421f73dd9248e0b553832d53", "text": "We introduce a dataset for facilitating audio-visual analysis of music performances. The dataset comprises 44 simple multi-instrument classical music pieces assembled from coordinated but separately recorded performances of individual tracks. For each piece, we provide the musical score in MIDI format, the audio recordings of the individual tracks, the audio and video recording of the assembled mixture, and ground-truth annotation files including frame-level and note-level transcriptions. We describe our methodology for the creation of the dataset, particularly highlighting our approaches to address the challenges involved in maintaining synchronization and expressiveness. We demonstrate the high quality of synchronization achieved with our proposed approach by comparing the dataset with existing widely used music audio datasets. We anticipate that the dataset will be useful for the development and evaluation of existing music information retrieval (MIR) tasks, as well as for novel multimodal tasks. We benchmark two existing MIR tasks (multipitch analysis and score-informed source separation) on the dataset and compare them with other existing music audio datasets. In addition, we consider two novel multimodal MIR tasks (visually informed multipitch analysis and polyphonic vibrato analysis) enabled by the dataset and provide evaluation measurements and baseline systems for future comparisons (from our recent work). Finally, we propose several emerging research directions that the dataset enables.", "title": "" }, { "docid": "8e3b1f49ca8a5afe20a9b66e0088a56a", "text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.", "title": "" }, { "docid": "6171a708ea6470b837439ad23af90dff", "text": "Cardiovascular diseases represent a worldwide relevant socioeconomical problem. 
Cardiovascular disease prevention relies also on lifestyle changes, including dietary habits. The cardioprotective effects of several foods and dietary supplements in both animal models and in humans have been explored. It was found that beneficial effects are mainly dependent on antioxidant and anti-inflammatory properties, also involving modulation of mitochondrial function. Resveratrol is one of the most studied phytochemical compounds and it is provided with several benefits in cardiovascular diseases as well as in other pathological conditions (such as cancer). Other relevant compounds are Brassica oleracea, curcumin, and berberine, and they all exert beneficial effects in several diseases. In the attempt to provide a comprehensive reference tool for both researchers and clinicians, we summarized in the present paper the existing literature on both preclinical and clinical cardioprotective effects of each mentioned phytochemical. We structured the discussion of each compound by analyzing, first, its cellular molecular targets of action, subsequently focusing on results from applications in both ex vivo and in vivo models, finally discussing the relevance of the compound in the context of human diseases.", "title": "" }, { "docid": "d1ef00d0860b0cab22280415c17430cb", "text": "The FreeBSD project has been engaged in ongoing work to provide scalable support for multi-processor computer systems since version 5. Sufficient progress has been made that the C library’s malloc(3) memory allocator is now a potential bottleneck for multi-threaded applications running on multiprocessor systems. In this paper, I present a new memory allocator that builds on the state of the art to provide scalable concurrent allocation for applications. Benchmarks indicate that with this allocator, memory allocation for multi-threaded applications scales well as the number of processors increases. At the same time, single-threaded allocation performance is similar to the previous allocator implementation.", "title": "" }, { "docid": "7c9a19d34140618ed5958ffa65b1c046", "text": "A publish/subscribe system dynamically routes and delivers events from sources to interested users, and is an extremely useful communication service when it is not clear in advance who needs what information. In this paper we discuss how a publish/subscribe system can be extended to operate in a mobile environment, where events can be generated by moving sensors or users, and subscribers can request delivery at handheld and/or mobile devices. We describe how the publish/subscribe system itself can be distributed across multiple (possibly mobile) computers to distribute load, and how the system can be replicated to cope with failures, message loss, and disconnections.", "title": "" }, { "docid": "056ff888208f16c18c1da36c22724e0f", "text": "Phone tokenization followed by n-gram language modeling has consistently provided good results for the task of language identification. In this paper, this technique is generalized by using Gaussian mixture models as the basis for tokenizing. Performance results are presented for a system employing a GMM tokenizer in conjunction with multiple language processing and score combination techniques. On the 1996 CallFriend LID evaluation set, a 12-way closed set error rate of 17% was obtained.", "title": "" }, { "docid": "53c2835a45ff743633f9d08867ca3f06", "text": "This paper presents a mathematical model and vertical flight control algorithms for a new tilt-wing unmanned aerial vehicle (UAV). 
The vehicle is capable of vertical take-off and landing (VTOL). Due to its tilt-wing structure, it can also fly horizontally. The mathematical model of the vehicle is obtained using Newton-Euler formulation. A gravity compensated PID controller is designed for altitude control, and three PID controllers are designed for attitude stabilization of the vehicle. Performances of these controllers are found to be quite satisfactory as demonstrated by indoor and outdoor flight experiments.", "title": "" }, { "docid": "d553741de150fd90c08ccd072b1d2634", "text": "Mass spectrometry, had and still has, a very important role for research and quality control in the viticulture and enology field, and its analytical power is relevant for structural studies on aroma and polyphenolic compounds. Polyphenols are responsible for the taste and color of wine, and confer astringency and structure to the beverage. The knowledge of the anthocyanic structure is very important to predict the aging attitude of wine, and to attempt to resolve problems about color stability. Moreover, polyphenols are the main compounds related to the benefits of wine consumption in the diet, because of their properties in the treatment of circulatory disorders such as capillary fragility, peripheral chronic venous insufficiency, and microangiopathy of the retina. Liquid Chromatography-Mass Spectrometry (LC-MS) techniques are nowadays the best analytical approach to study polyphenols in grape extracts and wine, and are the most effective tool in the study of the structure of anthocyanins. The MS/MS approach is a very powerful tool that permits anthocyanin aglycone and sugar moiety characterization. LC-MS allows the characterization of complex structures of grape polyphenols, such as procyanidins, proanthocyanidins, prodelphinidins, and tannins, and provides experimental evidence for structures that were previously only hypothesized. The matrix-assisted-laser-desorption-ionization-time-of-flight (MALDI-TOF) technique is suitable to determine the presence of molecules of higher molecular weight with high accuracy, and it has been applied with success to study procyanidin oligomers up to heptamers in the reflectron mode, and up to nonamers in the linear mode. The levels of resveratrol in wine, an important polyphenol well-known for its beneficial effects, have been determined by SPME and LC-MS, and the former approach led to the best results in terms of sensitivity.", "title": "" }, { "docid": "8a4a38a1a3fd30ed884b788290b9dc77", "text": "The performance of an engineered ecosystem constructed and operated by the BioProcess research group of Rio de Janeiro State University-UERJ to treat the sewage of a research campus was evaluated on the island of Ilha Grande, RJ, Brazil. The engineered ecosystem was created as a sustainable alternative for decentralized sewage treatment in rural areas and consists of conventional treatment units as well as vegetated and algae tanks. The main objective of the study was to analyze the performance of each specific tank, as well as the system overall, pollutant removal performance. A method of sampling according to the hydraulic retention time of each treatment unit was used, in order to gain better understanding of the complex processes that contributes to pollutant removal. Four series of sampling were conducted in a total of 9 sampling points, including the raw affluent and the effluents from all treatment units. 
The concentration of most parameters in the final effluent were below discharge limits set by Brazilian and Swedish regulations, with satisfactory removal for most parameters with the exception of total nitrogen and total phosphorus, which were just above the Swedish limits. One important observation, which was possible due to the sampling strategy, was the considerable variation in each treatment unit's performance among series. However, when comparing the system overall performance, removal rates among series were stable, indicating buffering capacity of the overall system and a cooperative nature between the different tanks. Furthermore the poor performance of the first of the four conducted sample series was striking and was probably caused by initially weakened bacteria cultures. Additionally, algal bloom was experienced in the vegetated and algae tanks, which is suspected to have impacted system performance, particularly in the form of enhanced phosphorus removal. In conclusion, the engineered ecosystem is considered to be a viable alternative to on-site sewage treatment in rural areas with tropical climates. However, some improvements are required specially to achieve higher phosphorus removal.", "title": "" }, { "docid": "0918688b8d8fccc3d98ae790d42b3e01", "text": "Structure-from-Motion for unordered image collections has significantly advanced in scale over the last decade. This impressive progress can be in part attributed to the introduction of efficient retrieval methods for those systems. While this boosts scalability, it also limits the amount of detail that the large-scale reconstruction systems are able to produce. In this paper, we propose a joint reconstruction and retrieval system that maintains the scalability of large-scale Structure-from-Motion systems while also recovering the often lost ability of reconstructing fine details of the scene. We demonstrate our proposed method on a large-scale dataset of 7.4 million images downloaded from the Internet.", "title": "" }, { "docid": "ab8599cbe4b906cea6afab663cbe2caf", "text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.", "title": "" }, { "docid": "b0148c89aad25a8d14c099713a18eab6", "text": "New algorithms for computing the Discrete Fourier Transform of n points are described. For n in the range of a few tens to a few thousands these algorithms use substantially fewer multiplications than the best algorithm previously known, and about the same number of additions.", "title": "" }, { "docid": "d8aae877405d95d592b7460bb10d8ebd", "text": "People sometimes choose word-like abbreviations to refer to items with a long description. 
These abbreviations usually come from the descriptive text of the item and are easy to remember and pronounce, while preserving the key idea of the item. Coming up with a nice abbreviation is not an easy job, even for human. Previous assistant naming systems compose names by applying hand-written rules, which may not perform well. In this paper, we propose to view the naming task as an artificial intelligence problem and create a data set in the domain of academic naming. To generate more delicate names, we propose a three-step framework, including description analysis, candidate generation and abbreviation ranking, each of which is parameterized and optimizable. We conduct experiments to compare different settings of our framework with several analysis approaches from different perspectives. Compared to online or baseline systems, our framework could achieve the best results.", "title": "" }, { "docid": "5898a24a260d2c653c1ec7d798a1024c", "text": "In this paper we present results for two tasks: social event detection and social network extraction from a literary text, Alice in Wonderland. For the first task, our system trained on a news corpus using tree kernels and support vector machines beats the baseline systems by a statistically significant margin. Using this system we extract a social network from Alice in Wonderland. We show that while we achieve an F-measure of about 61% on social event detection, our extracted unweighted network is not statistically distinguishable from the un-weighted gold network according to popularly used network measures.", "title": "" }, { "docid": "b088438d5e44d9fc2bd4156dbb708b1a", "text": "Applying parallelism to constraint solving seems a promising approach and it has been done with varying degrees of success. Early attempts to parallelize constraint propagation, which constitutes the core of traditional interleaved propagation and search constraint solving, were hindered by its essentially sequential nature. Recently, parallelization efforts have focussed mainly on the search part of constraint solving, as well as on local-search based solving. Lately, a particular source of parallelism has become pervasive, in the guise of GPUs, able to run thousands of parallel threads, and they have naturally drawn the attention of researchers in parallel constraint solving. In this paper, we address challenges faced when using multiple devices for constraint solving, especially GPUs, such as deciding on the appropriate level of parallelism to employ, load balancing and inter-device communication, and present our current solutions.", "title": "" }, { "docid": "ec6fd0bc7f59bdf865b4383a247b984f", "text": "This paper proposes a novel technique to forecast day-ahead electricity prices based on the wavelet transform and ARIMA models. The historical and usually ill-behaved price series is decomposed using the wavelet transform in a set of better-behaved constitutive series. Then, the future values of these constitutive series are forecast using properly fitted ARIMA models. In turn, the ARIMA forecasts allow, through the inverse wavelet transform, reconstructing the future behavior of the price series and therefore to forecast prices. Results from the electricity market of mainland Spain in year 2002 are reported.", "title": "" }, { "docid": "e99369633599d38d84ad1a5c74695475", "text": "Sarcasm is a form of language in which individual convey their message in an implicit way i.e. the opposite of what is implied. Sarcasm detection is the task of predicting sarcasm in text. 
This is the crucial step in sentiment analysis due to inherently ambiguous nature of sarcasm. With this ambiguity, sarcasm detection has always been a difficult task, even for humans. Therefore sarcasm detection has gained importance in many Natural Language Processing applications. In this paper, we describe approaches, issues, challenges and future scopes in sarcasm detection.", "title": "" }, { "docid": "172561db4f6d4bfe2b15c8d26adc3d91", "text": "\"Big Data\" in map-reduce (M-R) clusters is often fundamentally temporal in nature, as are many analytics tasks over such data. For instance, display advertising uses Behavioral Targeting (BT) to select ads for users based on prior searches, page views, etc. Previous work on BT has focused on techniques that scale well for offline data using M-R. However, this approach has limitations for BT-style applications that deal with temporal data: (1) many queries are temporal and not easily expressible in M-R, and moreover, the set-oriented nature of M-R front-ends such as SCOPE is not suitable for temporal processing, (2) as commercial systems mature, they may need to also directly analyze and react to real-time data feeds since a high turnaround time can result in missed opportunities, but it is difficult for current solutions to naturally also operate over real-time streams. Our contributions are twofold. First, we propose a novel framework called TiMR (pronounced timer), that combines a time-oriented data processing system with a M-R framework. Users write and submit analysis algorithms as temporal queries - these queries are succinct, scale-out-agnostic, and easy to write. They scale well on large-scale offline data using TiMR, and can work unmodified over real-time streams. We also propose new cost-based query fragmentation and temporal partitioning schemes for improving efficiency with TiMR. Second, we show the feasibility of this approach for BT, with new temporal algorithms that exploit new targeting opportunities. Experiments using real data from a commercial ad platform show that TiMR is very efficient and incurs orders-of-magnitude lower development effort. Our BT solution is easy and succinct, and performs up to several times better than current schemes in terms of memory, learning time, and click-through-rate/coverage.", "title": "" }, { "docid": "9859df7dbe200d09af3b598608905314", "text": "Split-merge moves are a standard component of MCMC algorithms for tasks such as multitarget tracking and fitting mixture models with unknown numbers of components. Achieving rapid mixing for split-merge MCMC has been notoriously difficult, and state-of-the-art methods do not scale well. We explore the reasons for this and propose a new split-merge kernel consisting of two sub-kernels: one combines a “smart” split move that proposes plausible splits of heterogeneous clusters with a “dumb” merge move that proposes merging random pairs of clusters; the other combines a dumb split move with a smart merge move. We show that the resulting smart-dumb/dumb-smart (SDDS) algorithm outperforms previous methods. Experiments with entity-mention models and Dirichlet process mixture models demonstrate much faster convergence and better scaling to large data sets.", "title": "" }, { "docid": "15f8f9a6a6ec038a9b48fcc30f39ad4e", "text": "The macrophage mannose receptor (MR, CD206) is a C-type lectin expressed predominantly by most tissue macrophages, dendritic cells and specific lymphatic or endothelial cells. 
It functions in endocytosis and phagocytosis, and plays an important role in immune homeostasis by scavenging unwanted mannoglycoproteins. More attention is being paid to its particularly high expression in tissue pathology sites during disease, such as the tumor microenvironment. The MR recognizes a variety of microorganisms by their mannan-coated cell wall, which is exploited by adapted intracellular pathogens such as Mycobacterium tuberculosis for their own survival. Despite the continued development of drug delivery technologies, the targeting of agents to immune cells, especially macrophages, for effective diagnosis and treatment of chronic infectious diseases has not been addressed adequately. In this regard, strategies that optimize MR-mediated uptake by macrophages in target tissues during infection are becoming an attractive approach. We review important progress in this area.", "title": "" } ]
scidocsrr
95f4a23a2a9c7bbb080c8d55ccbaabe5
ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction
[ { "docid": "55a0fb2814fde7890724a137fc414c88", "text": "Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.", "title": "" }, { "docid": "921251c0a45ee62af3d35d718d5cb09b", "text": "Deep convolutional neural networks comprise a subclass of deep neural networks (DNN) with a constrained architecture that leverages the spatial and temporal structure of the domain they model. Convolutional networks achieve the best predictive performance in areas such as speech and image recognition by hierarchically composing simple local features into complex models. Although DNNs have been used in drug discovery for QSAR and ligand-based bioactivity predictions, none of these models have benefited from this powerful convolutional architecture. This paper introduces AtomNet, the first structure-based, deep convolutional neural network designed to predict the bioactivity of small molecules for drug discovery applications. We demonstrate how to apply the convolutional concepts of feature locality and hierarchical composition to the modeling of bioactivity and chemical interactions. In further contrast to existing DNN techniques, we show that AtomNet’s application of local convolutional filters to structural target information successfully predicts new active molecules for targets with no previously known modulators. Finally, we show that AtomNet outperforms previous docking approaches on a diverse set of benchmarks by a large margin, achieving an AUC greater than 0.9 on 57.8% of the targets in the DUDE benchmark.", "title": "" }, { "docid": "349b0d539e560f1b4925fa0a4914fd13", "text": "There are two major challenges to overcome when developing a classifier to perform automatic disease diagnosis. First, the amount of labeled medical data is typically very limited, and a classifier cannot be effectively trained to attain high disease-detection accuracy. Second, medical domain knowledge is required to identify representative features in data for detecting a target disease. Most computer scientists and statisticians do not have such domain knowledge. In this work, we show that employing transfer learning can remedy both problems. We use Otitis Media (OM) to conduct our case study. Instead of using domain knowledge to extract features from labeled OM images, we construct features based on a dataset entirely OM-irrelevant. More specifically, we first learn a codebook in an unsupervised way from 15 million images collected from ImageNet. 
The codebook gives us what the encoders consider to be the fundamental elements of those 15 million images. We then encode OM images using the codebook and obtain a weighting vector for each OM image. Using the resulting weighting vectors as the feature vectors of the OM images, we employ a traditional supervised learning algorithm to train an OM classifier. The achieved detection accuracy is 88.5% (89.63% in sensitivity and 86.9% in specificity), markedly higher than all previous attempts, which relied on domain experts to help extract features.", "title": "" } ]
[ { "docid": "c87487289136493c3418fd39bf9fb0b3", "text": "Inductive power transfer (IPT) systems for transmitting tens to hundreds of watts have been reported for almost a decade. Most of the work has concentrated on the optimization of the link efficiency and has not taken into account the efficiency of the driver. Class-E amplifiers have been identified as ideal drivers for IPT applications, but their power handling capability at tens of megahertz has been a crucial limiting factor, since the load and inductor characteristics are set by the requirements of the resonant inductive system. The frequency limitation of the driver restricts the unloaded Q-factor of the coils and thus the link efficiency. With a suitable driver, copper coil unloaded Q factors of over 1000 can be achieved in the low megahertz region, enabling a cost-effective high Q coil assembly. The system presented in this paper alleviates the use of heavy and expensive field-shaping techniques by presenting an efficient IPT system capable of transmitting energy with a dc-to-load efficiency above 77% at 6 MHz across a distance of 30 cm. To the authors knowledge, this is the highest dc-to-load efficiency achieved for an IPT system without introducing restrictive coupling factor enhancement techniques.", "title": "" }, { "docid": "ed5bdeb59b337c167c734a56c038eaeb", "text": "This paper presents a millimeter-wave (mmW) frequency generation stage aimed at minimizing phase noise (PN) via waveform shaping and harmonic extraction while suppressing flicker noise upconversion via proper harmonic terminations. A 2nd-harmonic resonance is assisted by a proposed embedded decoupling capacitor inside a transformer for explicit common-mode current return path. Class-F operation with 3rd-harmonic boosting and extraction techniques allow maintaining high quality factor of a 10-GHz tank at the 30-GHz frequency generation. We further propose a comprehensive quantitative analysis method of flicker noise upconversion mechanism exploiting latest insights into the flicker noise mechanisms in nanoscale short-channel transistors, and it is numerically verified against foundry models. The proposed 27.3- to 31.2-GHz oscillator is implemented in TSMC 28-nm CMOS. It achieves PN of −106 dBc/Hz at 1-MHz offset and figure-of-merit (FoM) of −184 dBc/Hz at 27.3 GHz. Its flicker phase-noise ( $1/f^{3}$ ) corner of 120 kHz is an order-of-magnitude better than currently achievable at mmW.", "title": "" }, { "docid": "f49b1ebcdc85fa747fd068913cddefcd", "text": "We have developed a novel highly articulated robotic probe (HARP) that can thread through tightly packed volumes without disturbing the surrounding tissues and organs. We use cardiac surgery as the focal application of this work. As such, we have designed the HARP to enter the pericardial cavity through a subxiphoid port. The surgeon can effectively reach remote intrapericardial locations on the epicardium and deliver therapeutic interventions under direct control. Reducing the overall cross-sectional diameter of the mechanism was the main challenge in the design of this device. Our device differs from others in that we use conventional actuation and still have good maneuverability. We have performed simple proof-of-concept clinical experiments to give preliminary validation of the ideas presented here", "title": "" }, { "docid": "aaf9884ef7f4611279f30ce01f84e48c", "text": "Nowadays, patients have a wealth of information available on the Internet. 
Despite the potential benefits of Internet health information seeking, several concerns have been raised about the quality of information and about the patient's capability to evaluate medical information and to relate it to their own disease and treatment. As such, novel tools are required to effectively guide patients and provide high-quality medical information in an intelligent and personalised manner. With this aim, this paper presents the Personal Health Information Recommender (PHIR), a system to empower patients by enabling them to search in a high-quality document repository selected by experts, avoiding the information overload of the Internet. In addition, the information provided to the patients is personalised, based on individual preferences, medical conditions and other profiling information. Despite the generality of our approach, we apply the PHIR to a personal health record system constructed for cancer patients and we report on the design, the implementation and a preliminary validation of the platform. To the best of our knowledge, our platform is the only one combining natural language processing, ontologies and personal information to offer a unique user experience.", "title": "" }, { "docid": "f75a1e5c9268a3a64daa94bb9c7f522d", "text": "Many natural language generation tasks, such as abstractive summarization and text simplification, are paraphrase-orientated. In these tasks, copying and rewriting are two main writing modes. Most previous sequence-to-sequence (Seq2Seq) models use a single decoder and neglect this fact. In this paper, we develop a novel Seq2Seq model to fuse a copying decoder and a restricted generative decoder. The copying decoder finds the position to be copied based on a typical attention model. The generative decoder produces words limited in the source-specific vocabulary. To combine the two decoders and determine the final output, we develop a predictor to predict the mode of copying or rewriting. This predictor can be guided by the actual writing mode in the training data. We conduct extensive experiments on two different paraphrase datasets. The result shows that our model outperforms the stateof-the-art approaches in terms of both informativeness and language quality.", "title": "" }, { "docid": "755820a345dea56c4631ee14467e2e41", "text": "This paper presents a novel six-axis force/torque (F/T) sensor for robotic applications that is self-contained, rugged, and inexpensive. Six capacitive sensor cells are adopted to detect three normal and three shear forces. Six sensor cell readings are converted to F/T information via calibrations and transformation. To simplify the manufacturing processes, a sensor design with parallel and orthogonal arrangements of sensing cells is proposed, which achieves the large improvement of the sensitivity. Also, the signal processing is realized with a single printed circuit board and a ground plate, and thus, we make it possible to build a lightweight six-axis F/T sensor with simple manufacturing processes at extremely low cost. The sensor is manufactured and its performances are validated by comparing them with a commercial six-axis F/T sensor.", "title": "" }, { "docid": "ae10f7f8a8bf73e606355159cf71c91c", "text": "We propose a practical and scalable technique for point-to-point routing in wireless sensornets. This method, called Beacon Vector Routing (BVR), assigns coordinates to nodes based on the vector of hop count distances to a small set of beacons, and then defines a distance metric on these coordinates. 
BVR routes packets greedily, forwarding to the next hop that is the closest (according to this beacon vector distance metric) to the destination. We evaluate this approach through a combination of high-level simulation to investigate scaling and design tradeoffs, and a prototype implementation over real testbeds as a necessary reality check.", "title": "" }, { "docid": "0b28e0e8637a666d616a8c360d411193", "text": "As a novel dynamic network service infrastructure, Internet of Things (IoT) has gained remarkable popularity with obvious superiorities in the interoperability and real-time communication. Despite of the convenience in collecting information to provide the decision basis for the users, the vulnerability of embedded sensor nodes in multimedia devices makes the malware propagation a growing serious problem, which would harm the security of devices and their users financially and physically in wireless multimedia system (WMS). Therefore, many researches related to the malware propagation and suppression have been proposed to protect the topology and system security of wireless multimedia network. In these studies, the epidemic model is of great significance to the analysis of malware propagation. Considering the cloud and state transition of sensor nodes, a cloud-assisted model for malware detection and the dynamic differential game against malware propagation are proposed in this paper. Firstly, a SVM based malware detection model is constructed with the data sharing at the security platform in the cloud. Then the number of malware-infected nodes with physical infectivity to susceptible nodes is calculated precisely based on the attributes of WMS transmission. Then the state transition among WMS devices is defined by the modified epidemic model. Furthermore, a dynamic differential game and target cost function are successively derived for the Nash equilibrium between malware and WMS system. On this basis, a saddle-point malware detection and suppression algorithm is presented depending on the modified epidemic model and the computation of optimal strategies. Numerical results and comparisons show that the proposed algorithm can increase the utility of WMS efficiently and effectively.", "title": "" }, { "docid": "107b95c3bb00c918c73d82dd678e46c0", "text": "Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. 
A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).", "title": "" }, { "docid": "f5432da5e2cfdca1fb6bba71a0fb4756", "text": "Despite the success of image segmentation, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally segment new classes, without access of the original training data. They suffer from “catastrophic forgetting” — an abrupt degradation of performance on the old classes, when the training objective is adapted to the new classes. We present a method to address this issue, and learn image segmentation incrementally on private data whose annotations for the original classes in the new training set are unavailable. The key of our proposed solution is to balance the interplay between predictions on the new classes and distillation loss, it minimizes the discrepancy between responses for old classes on updated network via knowledge rehearsal. This incremental learning can be performed multiple times, for a new set of classes in each step, with a moderate drop in performance compared to the baseline network trained on the ensemble of data. We present image segmentation results on the PASCAL VOC 2012 and COCO datasets, on the ResNet and DenseNet architecture, along with a detailed empirical analysis of the approach.", "title": "" }, { "docid": "0da5045988b5064544870e1ff0f7ba44", "text": "Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of hidden nodes, including input weights and biases, are randomly assigned and need not be tuned while the output weights can be analytically determined by the simple generalized inverse operation. The only parameter needed to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely faster learning speed, better generalization performance and with least human intervention. This paper firstly introduces a brief review of ELM, describing the principle and algorithm of ELM. Then, we put emphasis on the improved methods or the typical variants of ELM, especially on incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarized the applications of ELM on classification, regression, function approximation, pattern recognition, forecasting and diagnosis, and so on. In the last, the paper discussed several open issues of ELM, which may be worthy of exploring in the future.", "title": "" }, { "docid": "90489f48161a13734cb91da56d4fad87", "text": "Given that the neural and connective tissues of the optic nerve head (ONH) exhibit complex morphological changes with the development and progression of glaucoma, their simultaneous isolation from optical coherence tomography (OCT) images may be of great interest for the clinical diagnosis and management of this pathology. A deep learning algorithm was designed and trained to digitally stain (i.e. highlight) 6 ONH tissue layers by capturing both the local (tissue texture) and contextual information (spatial arrangement of tissues). 
The overall dice coefficient (mean of all tissues) was 0.91 ± 0.05 when assessed against manual segmentations performed by an expert observer. We offer here a robust segmentation framework that could be extended for the automated parametric study of the ONH tissues.", "title": "" }, { "docid": "35502104f98e7ced7c39d622ed7a82ea", "text": "When security incidents occur, several challenges exist for conducting an effective forensic investigation of SCADA systems, which run 24/7 to control and monitor industrial and infrastructure processes. The Web extra at http://youtu.be/L0EFnr-famg is an audio interview with Irfan Ahmed about SCADA (supervisory control and data acquisition) systems.", "title": "" }, { "docid": "077162116799dffe986cb488dda2ee56", "text": "We present hybrid concolic testing, an algorithm that interleaves random testing with concolic execution to obtain both a deep and a wide exploration of program state space. Our algorithm generates test inputs automatically by interleaving random testing until saturation with bounded exhaustive symbolic exploration of program points. It thus combines the ability of random search to reach deep program states quickly together with the ability of concolic testing to explore states in a neighborhood exhaustively. We have implemented our algorithm on top of CUTE and applied it to obtain better branch coverage for an editor implementation (VIM 5.7, 150K lines of code) as well as a data structure implementation in C. Our experiments suggest that hybrid concolic testing can handle large programs and provide, for the same testing budget, almost 4× the branch coverage than random testing and almost 2× that of concolic testing.", "title": "" }, { "docid": "c42b89a03ac02256a05d5c88cb77d02f", "text": "The work of the Institute of Medicine and others has clearly demonstrated that when healthcare professionals understand each others' roles and are able to communicate and work effectively together, patients are more likely to receive safe, quality care. Currently, there are few opportunities to bring faculty and students in pre-licensure programs from multiple disciplines together for the purpose of learning together about each others' roles, and practicing collaboration and teamwork. Designing and implementing interprofessional education offerings is challenging. Course scheduling, faculty interest and expertise in interprofessional education (IPE), a culture of IPE among faculty and students, and institutional policies for sharing course credit among schools are just a few of the challenges. This article explores the concept of IPE, and how faculty in schools of nursing might take the lead to work with colleagues in other health profession schools to prepare graduates to understand each others' roles, and the importance of teamwork, communication, and collaboration to the delivery of high quality, safe patient care.", "title": "" }, { "docid": "5ffb32b43a1c89a808ff257cd524a15e", "text": "This paper presents a novel hierarchical approach for the simultaneous tracking of multiple targets in a video. We use a network flow approach to link detections in low-level and tracklets in high-level. At each step of the hierarchy, the confidence of candidates is measured by using a new scoring system, ConfRank, that considers the quality and the quantity of its neighborhood. The output of the first stage is a collection of safe tracklets and unlinked high-confidence detections. 
For each individual detection, we determine if it belongs to an existing or is a new tracklet. We show the effect of our framework to recover missed detections and reduce switch identity. The proposed tracker is referred to as TVOD for multi-target tracking using the visual tracker and generic object detector. We achieve competitive results with lower identity switches on several datasets comparing to state-of-the-art.", "title": "" }, { "docid": "0ea07af19fc199f6a9909bd7df0576a1", "text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. Experimental result has shown that UEOC is highly effective and efficient for discovering overlapping communities.", "title": "" }, { "docid": "e5a2c2ef9d2cb6376b18c1e7232016b2", "text": "In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: Dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results.", "title": "" }, { "docid": "8a6a5f02a399865afbbad607fd720d00", "text": "Estimating entropy and mutual information consistently is important for many machine learning applications. The Kozachenko-Leonenko (KL) estimator (Kozachenko & Leonenko, 1987) is a widely used nonparametric estimator for the entropy of multivariate continuous random variables, as well as the basis of the mutual information estimator of Kraskov et al. (2004), perhaps the most widely used estimator of mutual information in this setting. Despite the practical importance of these estimators, major theoretical questions regarding their finite-sample behavior remain open. 
This paper proves finite-sample bounds on the bias and variance of the KL estimator, showing that it achieves the minimax convergence rate for certain classes of smooth functions. In proving these bounds, we analyze finite-sample behavior of k-nearest neighbors (k-NN) distance statistics (on which the KL estimator is based). We derive concentration inequalities for k-NN distances and a general expectation bound for statistics of k-NN distances, which may be useful for other analyses of k-NN methods.", "title": "" }, { "docid": "41cfa1840ef8b6f35865b220c087302b", "text": "Ultra-high voltage (>10 kV) power devices based on SiC are gaining significant attentions since Si power devices are typically at lower voltage levels. In this paper, a world record 22kV Silicon Carbide (SiC) p-type ETO thyristor is developed and reported as a promising candidate for ultra-high voltage applications. The device is based on a 2cm2 22kV p type gate turn off thyristor (p-GTO) structure. Its static as well as dynamic performances are analyzed, including the anode to cathode blocking characteristics, forward conduction characteristics at different temperatures, turn-on and turn-off dynamic performances. The turn-off energy at 6kV, 7kV and 8kV respectively is also presented. In addition, theoretical boundary of the reverse biased safe operation area (RBSOA) of the 22kV SiC ETO is obtained by simulations and the experimental test also demonstrated a wide RBSOA.", "title": "" } ]
scidocsrr
17222fa33deec696321f109cc80a573f
Increasing accuracy of traffic light color detection and recognition using machine learning
[ { "docid": "425fb8419a81531e9f5ce3da96155d93", "text": "This paper presents the challenges that researchers must overcome in traffic light recognition (TLR) research and provides an overview of ongoing work. The aim is to elucidate which areas have been thoroughly researched and which have not, thereby uncovering opportunities for further improvement. An overview of the applied methods and noteworthy contributions from a wide range of recent papers is presented, along with the corresponding evaluation results. The evaluation of TLR systems is studied and discussed in depth, and we propose a common evaluation procedure, which will strengthen evaluation and ease comparison. To provide a shared basis for comparing TLR systems, we publish an extensive public data set based on footage from U.S. roads. The data set contains annotated video sequences, captured under varying light and weather conditions using a stereo camera. The data set, with its variety, size, and continuous sequences, should challenge current and future TLR systems.", "title": "" } ]
[ { "docid": "34f0a6e303055fc9cdefa52645c27ed5", "text": "Purpose – The purpose of this paper is to identify the factors that influence people to play socially interactive games on mobile devices. Based on network externalities and theory of uses and gratifications (U&G), it seeks to provide direction for further academic research on this timely topic. Design/methodology/approach – Based on 237 valid responses collected from online questionnaires, structural equation modeling technology was employed to examine the research model. Findings – The results reveal that both network externalities and individual gratifications significantly influence the intention to play social games on mobile devices. Time flexibility, however, which is one of the mobile device features, appears to contribute relatively little to the intention to play mobile social games. Originality/value – This research successfully applies a combination of network externalities theory and U&G theory to investigate the antecedents of players’ intentions to play mobile social games. This study is able to provide a better understanding of how two dimensions – perceived number of users/peers and individual gratification – influence mobile game playing, an insight that has not been examined previously in the mobile apps literature.", "title": "" }, { "docid": "7d61a2bedb128d77a81c6b4958a17c30", "text": "Levering data on social media, such as Twitter and Facebook, requires information retrieval algorithms to become able to relate very short text fragments to each other. Traditional text similarity methods such as tf-idf cosine-similarity, based on word overlap, mostly fail to produce good results in this case, since word overlap is little or non-existent. Recently, distributed word representations, or word embeddings, have been shown to successfully allow words to match on the semantic level. In order to pair short text fragments -- as a concatenation of separate words -- an adequate distributed sentence representation is needed, in existing literature often obtained by naively combining the individual word representations. We therefore investigated several text representations as a combination of word embeddings in the context of semantic pair matching. This paper investigates the effectiveness of several such naive techniques, as well as traditional tf-idf similarity, for fragments of different lengths. Our main contribution is a first step towards a hybrid method that combines the strength of dense distributed representations -- as opposed to sparse term matching -- with the strength of tf-idf based methods to automatically reduce the impact of less informative terms. Our new approach outperforms the existing techniques in a toy experimental set-up, leading to the conclusion that the combination of word embeddings and tf-idf information might lead to a better model for semantic content within very short text fragments.", "title": "" }, { "docid": "0b6b8474d7f71412e1e5a40f24c7d7d7", "text": "Practical work on spoken language translation must pursue two types of efficiency: computational efficiency, and “language engineering” efficiency. This paper describes the design, implementation, and evaluation of the GPL-based framework for spoken language translation that addresses both of these goals. 
In this framework, computational grammars are written in GPL, an easy-to-use imperative programming language that allows the direct expression of linguistic algorithms in terms of rewrite-grammars with feature structure tests and manipulations. Computational efficiency is achieved with the GPL compiler, which converts GPL grammars into efficient C routines, and with the GPL runtime environment, which provides services for linguistic representations, manipulation, and memory management. An evaluation of an English-Japanese spoken language translation system based on GPL shows that it is linguistically powerful, yet only requires reasonable computational resources.", "title": "" }, { "docid": "087936fd1f2ab9ebb793c720bb3c18d8", "text": "The research integrates the citizen participation literature with research on perceived control in an effort to further our understanding of psychological empowerment. Eleven indices of empowerment representing personality, cognitive, and motivational measures were identified to represent the construct. Three studies examined the relationship between empowerment and participation. The first study examined differences among groups identified by a laboratory manipulation as willing to participate in personally relevant or community relevant situations. Study II examined differences for groups defined by actual involvement in community activities and organizations. Study III replicated Study II with a different population. In each study, individuals reporting a greater amount of participation scored higher on indices of empowerment. Psychological empowerment could be described as the connection between a sense of personal competence, a desire for, and a willingness to take action in the public domain. Discriminant function analyses resulted in one significant dimension, identified as pyschological empowerment, that was positively correlated with leadership and negatively correlated with alienation.", "title": "" }, { "docid": "3106e93134e7000ab8cad6b9527b9360", "text": "Traditional GIS tools and systems are powerful for analyzing geographic information for various applications but they are not designed for processing dynamic streams of data. This paper presents a CyberGIS framework that can automatically synthesize multi-sourced data, such as social media and socioeconomic data, to track disaster events, to produce maps, and to perform spatial and statistical analysis for disaster management. Within our framework, Apache Hive, Hadoop, and Mahout are used as scalable distributed storage, computing environment and machine learning library to store, process and mine massive social media data. The proposed framework is capable of supporting big data analytics of multiple sources. A prototype is implemented and tested using the 2011 Hurricane Sandy as a case study.", "title": "" }, { "docid": "92daaebd657bda6ea340893d8608f459", "text": "Many crimes can happen every day in a major city, and figuring out which ones are committed by the same individual or group is an important and difficult data mining challenge. To do this, we propose a pattern detection algorithm called Series Finder, that grows a pattern of discovered crimes from within a database, starting from a “seed” of a few crimes. Series Finder incorporates both the common characteristics of all patterns and the unique aspects of each specific pattern. We compared Series Finder with classic clustering and classification models applied to crime analysis. 
It has promising results on a decade’s worth of crime pattern data from the Cambridge Police Department.", "title": "" }, { "docid": "c90f67d8aabc24faf9f4fb15a4cfd5a2", "text": "The Internet of Things (IoT) will connect not only computers and mobile devices, but it will also interconnect smart buildings, houses, and cities, as well as electrical grids, gas plants, and water networks, automobiles, airplanes, etc. IoT will lead to the development of a wide range of advanced information services that are pervasive, cost-effective, and can be accessed from anywhere and at any time. However, due to the exponential number of interconnected devices, cyber-security in the IoT is a major challenge. It heavily relies on the digital identity concept to build security mechanisms such as authentication and authorization. Current centralized identity management systems are built around third party identity providers, which raise privacy concerns and present a single point of failure. In addition, IoT unconventional characteristics such as scalability, heterogeneity and mobility require new identity management systems to operate in distributed and trustless environments, and uniquely identify a particular device based on its intrinsic digital properties and its relation to its human owner. In order to deal with these challenges, we present a Blockchain-based Identity Framework for IoT (BIFIT). We show how to apply our BIFIT to IoT smart homes to achieve identity self-management by end users. In the context of smart home, the framework autonomously extracts appliances signatures and creates blockchain-based identifies for their appliance owners. It also correlates appliances signatures (low level identities) and owners identifies in order to use them in authentication credentials and to make sure that any IoT entity is behaving normally.", "title": "" }, { "docid": "b9d558e9effc49d495d610f25eec2a42", "text": "The increasing complexity of engineering systems has sparked increasing interest in multdisciplinary optimization (MDO). This paper presents a survey of recent publications in the field of aerospace where interest in MDO has been particularly intense. The two main challenges of MDO are computational expense and organizational complexity. Accordingly the survey is focused on various ways different researchers use to deal with these challenges. The survey is organized by a breakdown of MDO into its conceptual components. Accordingly, the survey includes sections on Mathematical Modeling, Design-oriented Analysis, Approximation Concepts, Optimization Procedures, System Sensitivity, and Human Interface. With the authors’ main expertise being in the structures area, the bulk of the references focus on the interaction of the structures discipline with other disciplines. In particular, two sections at the end focus on two such interactions that have recently been pursued with a particular vigor: Simultaneous Optimization of Structures and Aerodynamics, and Simultaneous Optimization of Structures Combined With Active Control.", "title": "" }, { "docid": "1c1d8901dea3474d1a6ecf84a2044bd4", "text": "Zero-shot learning (ZSL) is typically achieved by resorting to a class semantic embedding space to transfer the knowledge from the seen classes to unseen ones. Capturing the common semantic characteristics between the visual modality and the class semantic modality (e.g., attributes or word vector) is a key to the success of ZSL. 
In this paper, we propose a novel encoder-decoder approach, namely latent space encoding (LSE), to connect the semantic relations of different modalities. Instead of requiring a projection function to transfer information across different modalities like most previous work, LSE performs the interactions of different modalities via a feature aware latent space, which is learned in an implicit way. Specifically, different modalities are modeled separately but optimized jointly. For each modality, an encoder-decoder framework is performed to learn a feature aware latent space via jointly maximizing the recoverability of the original space from the latent space and the predictability of the latent space from the original space. To relate different modalities together, their features referring to the same concept are enforced to share the same latent codings. In this way, the common semantic characteristics of different modalities are generalized with the latent representations. Another property of the proposed approach is that it is easily extended to more modalities. Extensive experimental results on four benchmark datasets [animal with attribute, Caltech UCSD birds, aPY, and ImageNet] clearly demonstrate the superiority of the proposed approach on several ZSL tasks, including traditional ZSL, generalized ZSL, and zero-shot retrieval.", "title": "" }, { "docid": "9ffdee7d929c8b5efb1baf0b2b46a7a4", "text": "Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to nontabular reinforcement learning. This pseudocount was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma’s Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.’s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma’s Revenge.", "title": "" }, { "docid": "a1b24627f8ba518fa9285596cc931e32", "text": "[3] Rakesh Agrawal and Arun Swami. A one-pass space-efficient algorithm for finding quantiles. A one-pass algorithm for accurately estimating quantiles for disk-resident data. [8] Jürgen Beringer and Eyke Hüllermeier. An efficient algorithm for instance-based learning on data streams.", "title": "" }, { "docid": "77d2255e0a2d77ea8b2682937b73cc7d", "text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. 
We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-", "title": "" }, { "docid": "5ca75490c015685a1fc670b2ee5103ff", "text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.", "title": "" }, { "docid": "4535a5961d6628f2f4bafb1d99821bbb", "text": "The prevalence of diabetes has dramatically increased worldwide due to the vast increase in the obesity rate. Diabetic nephropathy is one of the major complications of type 1 and type 2 diabetes and it is currently the leading cause of end-stage renal disease. Hyperglycemia is the driving force for the development of diabetic nephropathy. 
It is well known that hyperglycemia increases the production of free radicals resulting in oxidative stress. While increases in oxidative stress have been shown to contribute to the development and progression of diabetic nephropathy, the mechanisms by which this occurs are still being investigated. Historically, diabetes was not thought to be an immune disease; however, there is increasing evidence supporting a role for inflammation in type 1 and type 2 diabetes. Inflammatory cells, cytokines, and profibrotic growth factors including transforming growth factor-β (TGF-β), monocyte chemoattractant protein-1 (MCP-1), connective tissue growth factor (CTGF), tumor necrosis factor-α (TNF-α), interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-18 (IL-18), and cell adhesion molecules (CAMs) have all been implicated in the pathogenesis of diabetic nephropathy via increased vascular inflammation and fibrosis. The stimulus for the increase in inflammation in diabetes is still under investigation; however, reactive oxygen species are a primary candidate. Thus, targeting oxidative stress-inflammatory cytokine signaling could improve therapeutic options for diabetic nephropathy. The current review will focus on understanding the relationship between oxidative stress and inflammatory cytokines in diabetic nephropathy to help elucidate the question of which comes first in the progression of diabetic nephropathy, oxidative stress, or inflammation.", "title": "" }, { "docid": "676a91eee10de39ab11ea9c98b78ea0a", "text": "Advances in synthetic biology have enabled the engineering of cells with genetic circuits in order to program cells with new biological behavior, dynamic gene expression, and logic control. This cellular engineering progression offers an array of living sensors that can discriminate between cell states, produce a regulated dose of therapeutic biomolecules, and function in various delivery platforms. In this review, we highlight and summarize the tools and applications in bacterial and mammalian synthetic biology. The examples detailed in this review provide insight to further understand genetic circuits, how they are used to program cells with novel functions, and current methods to reliably interface this technology in vivo; thus paving the way for the design of promising novel therapeutic applications.", "title": "" }, { "docid": "16708c9e697dbd867aa81420bc669953", "text": "We propose a dynamic trust management protocol for Internet of Things (IoT) systems to deal with misbehaving nodes whose status or behavior may change dynamically. We consider an IoT system being deployed in a smart community where each node autonomously performs trust evaluation. We provide a formal treatment of the convergence, accuracy, and resilience properties of our dynamic trust management protocol and validate these desirable properties through simulation. We demonstrate the effectiveness of our dynamic trust management protocol with a trust-based service composition application in IoT environments. Our results indicate that trust-based service composition significantly outperforms non-trust-based service composition and approaches the maximum achievable performance based on ground truth status. 
Furthermore, our dynamic trust management protocol is capable of adaptively adjusting the best trust parameter setting in response to dynamically changing environments to maximize application performance.", "title": "" }, { "docid": "1768b368fe0bbd47fbb2dbae1f908b29", "text": "We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs. Our tasks are inspired by theory-of-mind experiments that examine whether children are able to reason about the beliefs of others, in particular when those beliefs differ from reality. We evaluate a number of recent neural models with memory augmentation. We find that all fail on our tasks, which require keeping track of inconsistent states of the world; moreover, the models’ accuracy decreases notably when random sentences are introduced to the tasks at test.1 1 Reasoning About Beliefs Possessing a capacity similar to human reasoning has been argued to be necessary for the success of artificial intelligence systems (e.g., Levesque et al., 2011). One well-studied domain that requires reasoning is question answering, where simply memorizing and looking up information is often not enough to correctly answer a question. For example, given the very simple scenario in Table 1, searching for the word “Mary” and returning a nearby word is not a correct strategy; instead, a model needs to recognize that Mary is currently at the second location (office and not the bathroom). Recent research has focused on developing neural models that succeed in such scenarios (Sukhbaatar et al., 2015; Henaff et al., 2017). As a benchmark to evaluate these models, Weston et al. (2016) released a dataset – Facebook bAbi – that provides a set of toy tasks, each examining a specific type of reasoning. For example, the scenario in Table 1 evaluates the capacity to reason using a single supporting fact. However, the bAbi tasks are already too simple for the current models. Only a few years after their release, existing 1 Code to generate dataset and replicate results is available at github.com/kayburns/tom-qa-dataset. models fail at only one or two (out of 20) tasks (Rae et al., 2016; Santoro et al., 2017). Moreover, all except two of the reasoning tasks in this dataset only require transitive inference (Lee et al., 2016). Mary went to the bathroom. John moved to the hallway. Mary travelled to the office. Where is Mary? A: office Table 1: A task from the bAbi dataset (Weston et al., 2016). People reason not just about their own observations and beliefs but also about others’ mental states (such as beliefs and intentions). The capacity to recognize that others can have mental states different than one’s own – theory of mind – marks an important milestone in the development of children and has been extensively studied by psychologists (for a review, see Flavell, 2004). Artificial intelligence (AI) systems will also require a similar reasoning capacity about mental states as they are expected to be able to interact with people (e.g., Chandrasekaran et al., 2017; Grant et al., 2017; Rabinowitz et al., 2018). However, the bAbi dataset does not include tasks that evaluate a model’s ability to reason about beliefs. Grant et al. (2017) created a bAbistyle dataset inspired by an influential experiment on the theory of mind called the Sally-Anne task (e.g. Baron-Cohen et al., 1985). 
Their goal was to examine whether the end-to-end memory network (Sukhbaatar et al., 2015) can answer questions such as “where does Sally think the milk is?” in situations that Sally’s belief about the location of milk does not match the reality. For example, Sally thinks that the milk is in the fridge but the milk is actually on the table. The dataset of Grant et al. (2017) provides a first step in designing benchmarks to evaluate the mental-state reasoning capacity of questionanswering models, but it is still limited in the types of reasoning it probes. For example, it ar X iv :1 80 8. 09 35 2v 1 [ cs .C L ] 2 8 A ug 2 01 8 only considered first-order beliefs (e.g., Sally’s belief about the location of milk). People also reason about second-order (and higher-order) beliefs (e.g., Anne’s belief about Sally’s belief about the location of the milk). More importantly, similarly to the bAbi dataset, success in each task is defined as correctly answering one question. This does not guarantee that a model has an understanding of the state of the world; in fact, even in developmental theory-of-mind experiments, children are asked a few questions (e.g., “where is milk really?”) to ensure that their correct answer reflects their understanding and is not simply due to chance. In this paper, we address these shortcomings by designing a new dataset that enables us to evaluate a model’s capacity to reason about different types of beliefs as well as whether it maintains a correct understanding of the world. To this end, we evaluate a number of different models that perform well on the bAbi tasks: the end-to-end memory network (Sukhbaatar et al., 2015), the multiple observer model (Grant et al., 2017), the recurrent entity network (Henaff et al., 2017), and RelationNetwork (Santoro et al., 2017). We find that none of these models succeed at our tasks, suggesting that they are not able to keep track of inconsistent states of the world, in particular when someone’s belief does not match the history or reality of a situation. 2 Theory of Mind Experiments Behavioral research shows that children gradually develop a theory of mind (for a review, see Gopnik and Astington, 1988). At the age of two, most children have an understanding of others’ desires and perceptions – if someone wants something, they will try to get it and if something is in their sight, they can see it. Children begin to understand others’ beliefs around the age of three, but this understanding is still limited. For example, they might not be able to reason that someone’s actions are a result of their beliefs. By the age of five, most children have a unified theory of mind and are able to represent and reason about others’ desires, perceptions, and beliefs. Developmental psychologists have designed various experimental paradigms to examine to what extent children are able to reason about others’ mental states. We use these experiments as guidelines for designing tasks to evaluate the reasoning capacity of question-answering models. We first explain these experiments. 2.1 The Sally-Anne Experiment The Sally-Anne false-belief experiment, proposed by Baron-Cohen et al. (1985), examines children’s ability to reason about others’ false beliefs, i.e., when someone’s belief does not match the reality. In this experiment, the participants observe two agents, Sally and Anne, with their containers, a basket and a box. After putting a marble in her basket, Sally leaves the room (and is not able to observe the events anymore). 
After Sally’s departure, Anne moves the marble to her box. Then, Sally returns to the room (see Figure 1). The participants are asked the following questions: • “Where will Sally look for her marble?”", "title": "" }, { "docid": "9ce232e2a49652ee7fbfe24c6913d52a", "text": "Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.", "title": "" }, { "docid": "92cecd8329343bc3a9b0e46e2185eb1c", "text": "The spondylo and spondylometaphyseal dysplasias (SMDs) are characterized by vertebral changes and metaphyseal abnormalities of the tubular bones, which produce a phenotypic spectrum of disorders from the mild autosomal-dominant brachyolmia to SMD Kozlowski to autosomal-dominant metatropic dysplasia. Investigations have recently drawn on the similar radiographic features of those conditions to define a new family of skeletal dysplasias caused by mutations in the transient receptor potential cation channel vanilloid 4 (TRPV4). This review demonstrates the significance of radiography in the discovery of a new bone dysplasia family due to mutations in a single gene.", "title": "" }, { "docid": "73dcb2e355679f2e466029fbbb24a726", "text": "Many of the world's most popular websites catalyze their growth through invitations from existing members. New members can then in turn issue invitations, and so on, creating cascades of member signups that can spread on a global scale. Although these diffusive invitation processes are critical to the popularity and growth of many websites, they have rarely been studied, and their properties remain elusive. For instance, it is not known how viral these cascades structures are, how cascades grow over time, or how diffusive growth affects the resulting distribution of member characteristics present on the site. In this paper, we study the diffusion of LinkedIn, an online professional network comprising over 332 million members, a large fraction of whom joined the site as part of a signup cascade. First we analyze the structural patterns of these signup cascades, and find them to be qualitatively different from previously studied information diffusion cascades. 
We also examine how signup cascades grow over time, and observe that diffusion via invitations on LinkedIn occurs over much longer timescales than are typically associated with other types of online diffusion. Finally, we connect the cascade structures with rich individual-level attribute data to investigate the interplay between the two. Using novel techniques to study the role of homophily in diffusion, we find striking differences between the local, edge-wise homophily and the global, cascade-level homophily we observe in our data, suggesting that signup cascades form surprisingly coherent groups of members.", "title": "" } ]
scidocsrr
cf97dbb648e0dae77fe6beda8c26924f
Predicting Ego-Vehicle Paths from Environmental Observations with a Deep Neural Network
[ { "docid": "2ecd0bf132b3b77dc1625ef8d09c925b", "text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.", "title": "" }, { "docid": "9bae1002ee5ebf0231fe687fd66b8bb5", "text": "We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.", "title": "" } ]
[ { "docid": "c120406dd4e60a9bb33dd4a87cbd3616", "text": "Intersubjectivity is an important concept in psychology and sociology. It refers to sharing conceptualizations through social interactions in a community and using such shared conceptualization as a resource to interpret things that happen in everyday life. In this work, we make use of intersubjectivity as the basis to model shared stance and subjectivity for sentiment analysis. We construct an intersubjectivity network which links review writers, terms they used, as well as the polarities of the terms. Based on this network model, we propose a method to learn writer embeddings which are subsequently incorporated into a convolutional neural network for sentiment analysis. Evaluations on the IMDB, Yelp 2013 and Yelp 2014 datasets show that the proposed approach has achieved the state-of-the-art performance.", "title": "" }, { "docid": "ce0b0543238a81c3f02c43e63a285605", "text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.", "title": "" }, { "docid": "7f8ca7d8d2978bfc08ab259fba60148e", "text": "Over the last few years, much online volunteered geographic information (VGI) has emerged and has been increasingly analyzed to understand places and cities, as well as human mobility and activity. However, there are concerns about the quality and usability of such VGI. In this study, we demonstrate a complete process that comprises the collection, unification, classification and validation of a type of VGI—online point-of-interest (POI) data—and develop methods to utilize such POI data to estimate disaggregated land use (i.e., employment size by category) at a very high spatial resolution (census block level) using part of the Boston metropolitan area as an example. With recent advances in activity-based land use, transportation, and environment (LUTE) models, such disaggregated land use data become important to allow LUTE models to analyze and simulate a person’s choices of work location and activity destinations and to understand policy impacts on future cities. These data can also be used as alternatives to explore economic activities at the local level, especially as government-published census-based disaggregated employment data have become less available in the recent decade. Our new approach provides opportunities for cities to estimate land use at high resolution with low cost by utilizing VGI while ensuring its quality with a certain accuracy threshold. The automatic classification of POI can also be utilized for other types of analyses on cities. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0cfda368edafe21e538f2c1d7ed75056", "text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. 
The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.", "title": "" }, { "docid": "19350a76398e0054be44c73618cdfb33", "text": "An emerging class of data-intensive applications involve the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.", "title": "" }, { "docid": "945ead15b96ed06a15b12372b4787fcf", "text": "We describe the development and testing of ab initio derived, AMBER ff03 compatible charge parameters for a large library of 147 noncanonical amino acids including β- and N-methylated amino acids for use in applications such as protein structure prediction and de novo protein design. The charge parameter derivation was performed using the RESP fitting approach. Studies were performed assessing the suitability of the derived charge parameters in discriminating the activity/inactivity between 63 analogs of the complement inhibitor Compstatin on the basis of previously published experimental IC50 data and a screening procedure involving short simulations and binding free energy calculations. We found that both the approximate binding affinity (K*) and the binding free energy calculated through MM-GBSA are capable of discriminating between active and inactive Compstatin analogs, with MM-GBSA performing significantly better. Key interactions between the most potent Compstatin analog that contains a noncanonical amino acid are presented and compared to the most potent analog containing only natural amino acids and native Compstatin. 
We make the derived parameters and an associated web interface that is capable of performing modifications on proteins using Forcefield_NCAA and outputting AMBER-ready topology and parameter files freely available for academic use at http://selene.princeton.edu/FFNCAA . The forcefield allows one to incorporate these customized amino acids into design applications with control over size, van der Waals, and electrostatic interactions.", "title": "" }, { "docid": "1fbf45145e6ce4b37e3b840a80733ce7", "text": "Ionic liquids (ILs) comprise an extremely broad class of molten salts that are attractive for many practical applications because of their useful combinations of properties [1-3]. The ability to mix and match the cationic and anionic constituents of ILs and functionalize their side chains. These allow amazing tenability of IL properties, including conductivity, viscosi‐ ty, solubility of diverse solutes and miscibility/ immiscibility with a wide range of solvents. [4] Over the past several years, room temperature ILs (RTILs) has generated considerable excitement, as they consist entirely of ions, yet in liquid state and possess minimal vapour pressure. Consequently, ILs can be recycled, thus making synthetic processes less expensive and potentially more efficient and environmentally friendly. Considerable progress has been made using ILs as solvents in the areas of monophasic and biphasic catalysis (homoge‐ neus and heterogeneous).[5-6] The ILs investigated herein provides real practical advantag‐ es over earlier molten salt (high temperature) systems because of their relative insensitivity to air and water. [6-7] A great deal of progress has been made during last five years towards identifying the factors that cause these salts to have low melting points and other useful properties.[8] ILs are subject of intense current interest within the physical chemistry com‐ munity as well. There have been quite a lot of photophysical studies in ionic liquids. [8] The most important properties of ionic liquids are: thermal stability, low vapour pressure, elec‐ tric conductivity, liquid crystal structures, high electro-elasticity, high heat capacity and in‐ flammability properties enable the use of ionic liquids in a wide range of applications, as shown in Figure 1. It is also a suitable solvent for synthesis, [5, 8, 9-12] catalysis [6, 8, 13] and purification. [14-18] It is also used in electrochemical devices and processes, such as re‐ chargeable lithium batteries and electrochemical capacitors, etc.[19] Rechargeable Lithium", "title": "" }, { "docid": "d49d405fc765b647b39dc9ef1b4d6ba9", "text": "The World Wide Web plays an important role while searching for information in the data network. Users are constantly exposed to an ever-growing flood of information. Our approach will help in searching for the exact user relevant content from multiple search engines thus, making the search more efficient and reliable. Our framework will extract the relevant result records based on two approaches i.e. Stored URL list and Run time Generated URL list. Finally, the unique set of records is displayed in a common framework's search result page. The extraction is performed using the concepts of Document Object Model (DOM) tree. The paper comprises of a concept of threshold and data filters to detect and remove irrelevant & redundant data from the web page. The data filters will also be used to further improve the similarity check of data records. 
Our system will be able to extract 75%-80% user relevant content by eliminating noisy content from the different structured web pages like blogs, forums, articles etc. in the dynamic environment. Our approach shows significant advantages in both precision and recall.", "title": "" }, { "docid": "f159ee79d20f00194402553758bcd031", "text": "Recently, narrowband Internet of Things (NB-IoT), one of the most promising low power wide area (LPWA) technologies, has attracted much attention from both academia and industry. It has great potential to meet the huge demand for machine-type communications in the era of IoT. To facilitate research on and application of NB-IoT, in this paper, we design a system that includes NB devices, an IoT cloud platform, an application server, and a user app. The core component of the system is to build a development board that integrates an NB-IoT communication module and a subscriber identification module, a micro-controller unit and power management modules. We also provide a firmware design for NB device wake-up, data sensing, computing and communication, and the IoT cloud configuration for data storage and analysis. We further introduce a framework on how to apply the proposed system to specific applications. The proposed system provides an easy approach to academic research as well as commercial applications.", "title": "" }, { "docid": "3ed3b4f507c32f6423ca3918fa3eb843", "text": "In recent years, it has been clearly evidenced that most cells in a human being are not human: they are microbial, represented by more than 1000 microbial species. The vast majority of microbial species give rise to symbiotic host-bacterial interactions that are fundamental for human health. The complex of these microbial communities has been defined as microbiota or microbiome. These bacterial communities, forged over millennia of co-evolution with humans, are at the basis of a partnership with the developing human newborn, which is based on reciprocal molecular exchanges and cross-talking. Recent data on the role of the human microbiota in newborns and children clearly indicate that microbes have a potential importance to pediatrics, contributing to host nutrition, developmental regulation of intestinal angiogenesis, protection from pathogens, and development of the immune system. This review is aimed at reporting the most recent data on the knowledge of microbiota origin and development in the human newborn, and on the multiple factors influencing development and maturation of our microbiota, including the use and abuse of antibiotic therapies.", "title": "" }, { "docid": "4d0b04f546ab5c0d79bb066b1431ff51", "text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. 
Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a17052726cbf3239c3f516b51af66c75", "text": "Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part, or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be “reinventing the wheel”. Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.", "title": "" }, { "docid": "366f31829bb1ac55d195acef880c488e", "text": "Intense competition among a vast number of group-buying websites leads to higher product homogeneity, which allows customers to switch to alternative websites easily and reduce their website stickiness and loyalty. This study explores the antecedents of user stickiness and loyalty and their effects on consumers’ group-buying repurchase intention. Results indicate that systems quality, information quality, service quality, and alternative system quality each has a positive relationship with user loyalty through user stickiness. Meanwhile, information quality directly impacts user loyalty. Thereafter, user stickiness and loyalty each has a positive relationship with consumers’ repurchase intention. Theoretical and managerial implications are also discussed.", "title": "" }, { "docid": "445d57e24150087a866fc34ddb422184", "text": "A survey of the major techniques used in the design of microwave filters is presented in this paper. It is shown that the basis for much fundamental microwave filter theory lies in the realm of lumped-element filters, which indeed are actually used directly for many applications at microwave frequencies as high as 18 GHz. Many types of microwave filters are discussed with the object of pointing out the most useful references, especially for a newcomer to the field.", "title": "" }, { "docid": "8f2a36d188e9efb614d4b324188c83d5", "text": "Neurobiologically inspired algorithms have been developed to continuously learn behavioral patterns at a variety of conceptual, spatial, and temporal levels. In this paper, we outline our use of these algorithms for situation awareness in the maritime domain. Our algorithms take real-time tracking information and learn motion pattern models on-the-fly, enabling the models to adapt well to evolving situations while maintaining high levels of performance. 
The constantly refined models, resulting from concurrent incremental learning, are used to evaluate the behavior patterns of vessels based on their present motion states. At the event level, learning provides the capability to detect (and alert) upon anomalous behavior. At a higher (inter-event) level, learning enables predictions, over pre-defined time horizons, to be made about future vessel location. Predictions can also be used to alert on anomalous behavior. Learning is context-specific and occurs at multiple levels: for example, for individual vessels as well as classes of vessels. Features and performance of our learning system using recorded data are described", "title": "" }, { "docid": "b1823c456360037d824614a6cf4eceeb", "text": "This paper provides an overview of the Industrial Internet with the emphasis on the architecture, enabling technologies, applications, and existing challenges. The Industrial Internet is enabled by recent rising sensing, communication, cloud computing, and big data analytic technologies, and has been receiving much attention in the industrial section due to its potential for smarter and more efficient industrial productions. With the merge of intelligent devices, intelligent systems, and intelligent decisioning with the latest information technologies, the Industrial Internet will enhance the productivity, reduce cost and wastes through the entire industrial economy. This paper starts by investigating the brief history of the Industrial Internet. We then present the 5C architecture that is widely adopted to characterize the Industrial Internet systems. Then, we investigate the enabling technologies of each layer that cover from industrial networking, industrial intelligent sensing, cloud computing, big data, smart control, and security management. This provides the foundations for those who are interested in understanding the essence and key enablers of the Industrial Internet. Moreover, we discuss the application domains that are gradually transformed by the Industrial Internet technologies, including energy, health care, manufacturing, public section, and transportation. Finally, we present the current technological challenges in developing Industrial Internet systems to illustrate open research questions that need to be addressed to fully realize the potential of future Industrial Internet systems.", "title": "" }, { "docid": "a8699e1ed8391e5a55fbd79ae3ac0972", "text": "The benefits of an e-learning system will not be maximized unless learners use the system. This study proposed and tested alternative models that seek to explain student intention to use an e-learning system when the system is used as a supplementary learning tool within a traditional class or a stand-alone distance education method. The models integrated determinants from the well-established technology acceptance model as well as system and participant characteristics cited in the research literature. Following a demonstration and use phase of the e-learning system, data were collected from 259 college students. Structural equation modeling provided better support for a model that hypothesized stronger effects of system characteristics on e-learning system use. Implications for both researchers and practitioners are discussed. 2004 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "d6ebe4bacd4a9cea920cfb18aebd5f28", "text": "Page Abstract ............................................................................................................2 Introduction ......................................................................................................2 Key MOSFET Electrical Parameters in Class D Audio Amplifiers ....................2 Drain Source Breakdown Voltage BVDSS................................................2 Static Drain-to-Source On Resistance RDS(on).........................................4 Gate Charge Qg......................................................................................5 Body Diode Reverse Recovery Charge, Qrr ...........................................8 Internal Gate Resistance RG(int)........................................................11 MOSFET Package ..................................................................................11 Maximum Junction Temperature .............................................................12 International Rectifier Digital Audio MOSFET ...................................................13 Conclusions.........................................................................................14 References........................................................................................................14", "title": "" }, { "docid": "0869a75f158b04513c848bc7bfb10e37", "text": "Tracking of multiple objects is an important application in AI City geared towards solving salient problems related to safety and congestion in an urban environment. Frequent occlusion in traffic surveillance has been a major problem in this research field. In this challenge, we propose a model-based vehicle localization method, which builds a kernel at each patch of the 3D deformable vehicle model and associates them with constraints in 3D space. The proposed method utilizes shape fitness evaluation besides color information to track vehicle objects robustly and efficiently. To build 3D car models in a fully unsupervised manner, we also implement evolutionary camera self-calibration from tracking of walking humans to automatically compute camera parameters. Additionally, the segmented foreground masks which are crucial to 3D modeling and camera self-calibration are adaptively refined by multiple-kernel feedback from tracking. For object detection/ classification, the state-of-theart single shot multibox detector (SSD) is adopted to train and test on the NVIDIA AI City Dataset. To improve the accuracy on categories with only few objects, like bus, bicycle and motorcycle, we also employ the pretrained model from YOLO9000 with multiscale testing. We combine the results from SSD and YOLO9000 based on ensemble learning. Experiments show that our proposed tracking system outperforms both state-of-the-art of tracking by segmentation and tracking by detection. Keywords—multiple object tracking, constrained multiple kernels, 3D deformable model, camera self-calibration, adaptive segmentation, object detection, object classification", "title": "" }, { "docid": "8f4c629147db41356763de733aea618b", "text": "The application of simulation software in the planning process is state-of-the-art at many railway infrastructure managers. On the one hand software tools are used to point out the demand for new infrastructure and on the other hand they are used to optimize traffic flow in railway networks by support of the time table related processes. 
This paper deals with the first application of the software tool called OPENTRACK to the simulation of railway operation on an existing line in Croatia from Zagreb to Karlovac. The aim of the work was to find out whether the current version of OPENTRACK is able to model the Croatian signalling system. This would also open up the possibility of using it for other investigations of railway operation.", "title": "" } ]
scidocsrr
21737b8c6c37aef3918d38a66552e5d2
A unified Bayesian framework for MEG/EEG source imaging
[ { "docid": "62d39d41523bca97939fa6a2cf736b55", "text": "We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.", "title": "" } ]
[ { "docid": "6824f227a05b30b9e09ea9a4d16429b0", "text": "This study presents a Long Short-Term Memory (LSTM) neural network approach to Japanese word segmentation (JWS). Previous studies on Chinese word segmentation (CWS) succeeded in using recurrent neural networks such as LSTM and gated recurrent units (GRU). However, in contrast to Chinese, Japanese includes several character types, such as hiragana, katakana, and kanji, that produce orthographic variations and increase the difficulty of word segmentation. Additionally, it is important for JWS tasks to consider a global context, and yet traditional JWS approaches rely on local features. In order to address this problem, this study proposes employing an LSTMbased approach to JWS. The experimental results indicate that the proposed model achieves state-of-the-art accuracy with respect to various Japanese corpora.", "title": "" }, { "docid": "338efe667e608779f4f41d1cdb1839bb", "text": "In ASP.NET, Programmers maybe use POST or GET to pass parameter's value. Two methods are easy to come true. But In ASP.NET, It is not easy to pass parameter's value. In ASP.NET, Programmers maybe use many methods to pass parameter's value, such as using Application, Session, Querying, Cookies, and Forms variables. In this paper, by way of pass value from WebForm1.aspx to WebForm2.aspx and show out the value on WebForm2. We can give and explain actually examples in ASP.NET language to introduce these methods.", "title": "" }, { "docid": "7e4e5472e5ee0b25511975f3422d2173", "text": "Most people with Parkinson's disease (PD) fall and many experience recurrent falls. The aim of this review was to examine the scope of recurrent falls and to identify factors associated with recurrent fallers. A database search for journal articles which reported prospectively collected information concerning recurrent falls in people with PD identified 22 studies. In these studies, 60.5% (range 35 to 90%) of participants reported at least one fall, with 39% (range 18 to 65%) reporting recurrent falls. Recurrent fallers reported an average of 4.7 to 67.6 falls per person per year (overall average 20.8 falls). Factors associated with recurrent falls include: a positive fall history, increased disease severity and duration, increased motor impairment, treatment with dopamine agonists, increased levodopa dosage, cognitive impairment, fear of falling, freezing of gait, impaired mobility and reduced physical activity. The wide range in the frequency of recurrent falls experienced by people with PD suggests that it would be beneficial to classify recurrent fallers into sub-groups based on fall frequency. Given that there are several factors particularly associated with recurrent falls, fall management and prevention strategies specifically targeting recurrent fallers require urgent evaluation in order to inform clinical practice.", "title": "" }, { "docid": "9869bc5dfc8f20b50608f0d68f7e49ba", "text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. 
By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.", "title": "" }, { "docid": "0177729f2d7fc610bd8e55a93a93b03b", "text": "Preference-based recommendation systems have transformed how we consume media. By analyzing usage data, these methods uncover our latent preferences for items (such as articles or movies) and form recommendations based on the behavior of others with similar tastes. But traditional preference-based recommendations do not account for the social aspect of consumption, where a trusted friend might point us to an interesting item that does not match our typical preferences. In this work, we aim to bridge the gap between preference- and social-based recommendations. We develop social Poisson factorization (SPF), a probabilistic model that incorporates social network information into a traditional factorization method; SPF introduces the social aspect to algorithmic recommendation. We develop a scalable algorithm for analyzing data with SPF, and demonstrate that it outperforms competing methods on six real-world datasets; data sources include a social reader and Etsy.", "title": "" }, { "docid": "e0633afb6f4dcb1561dbb23b6e3aa713", "text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.", "title": "" }, { "docid": "371d71e1f8cb0881e23f2fc1423baca3", "text": "Positional asphyxia refers to a situation where there is compromise of respiration because of splinting of the chest and/or diaphragm preventing normal respiratory excursion, or occlusion of the upper airway due to abnormal positioning of the body. Examination of autopsy files at Forensic Science SA revealed instances where positional asphyxia resulted from inadvertent positioning that compromised respiration due to intoxication, multiple sclerosis, epilepsy, Parkinson disease, Steele-Richardson-Olszewski syndrome, Lafora disease and quadriplegia. While the manner of death was accidental in most cases, in one instance suicide could not be ruled out. We would not exclude the possibility of individuals with significant cardiac disease succumbing to positional asphyxia, as cardiac disease may be either unrelated to the terminal episode or, alternatively, may result in collapse predisposing to positional asphyxia. 
Victims of positional asphyxia do not extricate themselves from dangerous situations due to impairment of cognitive responses and coordination resulting from intoxication, sedation, neurological diseases, loss of consciousness, physical impairment or physical restraints.", "title": "" }, { "docid": "1466bdb9a7f5662c8a15de9009bc7687", "text": "Mining opinions and analyzing sentiments from social network data help in various fields such as even prediction, analyzing overall mood of public on a particular social issue and so on. This paper involves analyzing the mood of the society on a particular news from Twitter posts. The key idea of the paper is to increase the accuracy of classification by including Natural Language Processing Techniques (NLP) especially semantics and Word Sense Disambiguation. The mined text information is subjected to Ensemble classification to analyze the sentiment. Ensemble classification involves combining the effect of various independent classifiers on a particular classification problem. Experiments conducted demonstrate that ensemble classifier outperforms traditional machine learning classifiers by 3-5%.", "title": "" }, { "docid": "587ee07095b4bd1189e3bb0af215fa95", "text": "This paper discusses dynamic factor analysis, a technique for estimating common trends in multivariate time series. Unlike more common time series techniques such as spectral analysis and ARIMA models, dynamic factor analysis can analyse short, non-stationary time series containing missing values. Typically, the parameters in dynamic factor analysis are estimated by direct optimisation, which means that only small data sets can be analysed if computing time is not to become prohibitively long and the chances of obtaining sub-optimal estimates are to be avoided. This paper shows how the parameters of dynamic factor analysis can be estimated using the EM algorithm, allowing larger data sets to be analysed. The technique is illustrated on a marine environmental data set.", "title": "" }, { "docid": "66fa9b79b1034e1fa3bf19857b5367c2", "text": "We propose a boundedly-rational model of opinion formation in which individuals are subject to persuasion bias; that is, they fail to account for possible repetition in the information they receive. We show that persuasion bias implies the phenomenon of social influence, whereby one’s influence on group opinions depends not only on accuracy, but also on how well-connected one is in the social network that determines communication. Persuasion bias also implies the phenomenon of unidimensional opinions; that is, individuals’ opinions over a multidimensional set of issues converge to a single “left-right” spectrum. We explore the implications of our model in several natural settings, including political science and marketing, and we obtain a number of novel empirical implications. DeMarzo and Zwiebel: Graduate School of Business, Stanford University, Stanford CA 94305, Vayanos: MIT Sloan School of Management, 50 Memorial Drive E52-437, Cambridge MA 02142. This paper is an extensive revision of our paper, “A Model of Persuasion – With Implication for Financial Markets,” (first draft, May 1997). 
We are grateful to Nick Barberis, Gary Becker, Jonathan Bendor, Larry Blume, Simon Board, Eddie Dekel, Stefano DellaVigna, Darrell Duffie, David Easley, Glenn Ellison, Simon Gervais, Ed Glaeser, Ken Judd, David Kreps, Edward Lazear, George Loewenstein, Lee Nelson, Anthony Neuberger, Matthew Rabin, José Scheinkman, Antoinette Schoar, Peter Sorenson, Pietro Veronesi, Richard Zeckhauser, three anonymous referees, and seminar participants at the American Finance Association Annual Meetings, Boston University, Cornell, Carnegie-Mellon, ESSEC, the European Summer Symposium in Financial Markets at Gerzensee, HEC, the Hoover Institution, Insead, MIT, the NBER Asset Pricing Conference, the Northwestern Theory Summer Workshop, NYU, the Stanford Institute for Theoretical Economics, Stanford, Texas A&M, UCLA, U.C. Berkeley, Université Libre de Bruxelles, University of Michigan, University of Texas at Austin, University of Tilburg, and the Utah Winter Finance Conference for helpful comments and discussions. All errors are our own.", "title": "" }, { "docid": "fc431a3c46bdd4fa4ad83b9af10c0922", "text": "The importance of the kidney's role in glucose homeostasis has gained wider understanding in recent years. Consequently, the development of a new pharmacological class of anti-diabetes agents targeting the kidney has provided new treatment options for the management of type 2 diabetes mellitus (T2DM). Sodium glucose co-transporter type 2 (SGLT2) inhibitors, such as dapagliflozin, canagliflozin, and empagliflozin, decrease renal glucose reabsorption, which results in enhanced urinary glucose excretion and subsequent reductions in plasma glucose and glycosylated hemoglobin concentrations. Modest reductions in body weight and blood pressure have also been observed following treatment with SGLT2 inhibitors. SGLT2 inhibitors appear to be generally well tolerated, and have been used safely when given as monotherapy or in combination with other oral anti-diabetes agents and insulin. The risk of hypoglycemia is low with SGLT2 inhibitors. Typical adverse events appear to be related to the presence of glucose in the urine, namely genital mycotic infection and lower urinary tract infection, and are more often observed in women than in men. Data from long-term safety studies with SGLT2 inhibitors and from head-to-head SGLT2 inhibitor comparator studies are needed to fully determine their benefit-risk profile, and to identify any differences between individual agents. However, given current safety and efficacy data, SGLT2 inhibitors may present an attractive option for T2DM patients who are failing with metformin monotherapy, especially if weight is part of the underlying treatment consideration.", "title": "" }, { "docid": "d041a5fc5f788b1abd8abf35a26cb5d2", "text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. 
We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.", "title": "" }, { "docid": "a1cd5424dea527e365f038fce60fd821", "text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.", "title": "" }, { "docid": "ead196a54f4ea7b5a1fe4b5b85f0b2c6", "text": "Supervised machine learning and opinion lexicon are the most frequent approaches for opinion mining, but they require considerable effort to prepare the training data and to build the opinion lexicon, respectively. In this paper, a novel unsupervised clustering approach is proposed for opinion mining. 
Three swarm algorithms based on Particle Swarm Optimization are evaluated using three corpora with different levels of complexity with respect to size, number of opinions, domains, languages, and class balancing. K-means and Agglomerative clustering algorithms, as well as, the Artificial Bee Colony and Cuckoo Search swarm-based algorithms were selected for comparison. The proposed swarm-based algorithms achieved better accuracy using the word bigram feature model as the pre-processing technique, the Global Silhouette as optimization function, and on datasets with two classes: positive and negative. Although the swarm-based algorithms obtained lower result for datasets with three classes, they are still competitive considering that neither labeled data, nor opinion lexicons are required for the opinion clustering approach.", "title": "" }, { "docid": "50f5bb2f0c71bf0d529a0e65cd6066b3", "text": "It would be a significant understatement to say that sales promotion is enjoying a dominant role in the promotional mixes of most consumer goods companies. The 1998 Cox Direct 20th Annual Survey of Promotional Practices suggests that many companies spend as much as 75% of their total promotional budgets on sales promotion and only 25% on advertising. This is up from 57% spent on sales promotions in 1981 (Landler and DeGeorge). The reasons for this unprecedented growth have been welldocumented. Paramount among these is the desire on the part of many organizations for a quick bolstering of sales. The obvious corollary to this is the desire among consumer groups for increased value in the products they buy. Value can be defined as the ratio of perceived benefits to price, and is linked to performance and meeting consumers' expectations (Zeithaml 1988). In today's value-conscious environment, marketers must stress the overall value of their products (Blackwell, Miniard and Engel 2001). Consumers have reported that coupons, price promotions and good value influence 75 80% of their brand choice decisions (Cox 1998). Today, \"many Americans, brought up on a steady diet of commercials, view advertising with cynicism or indifference. With less money to shop, they're far more apt to buy on price\" (Landler and DeGeorge 1991, 68).", "title": "" }, { "docid": "8b64d5f3c59737369e2e6d8a12fc4c20", "text": "A microcontroller based advanced technique of generating sine wave with lowest harmonics is designed and implemented in this paper. The main objective of our proposed technique is to design a low cost, low harmonics voltage source inverter. In our project we used PIC16F73 microcontroller to generate 4 KHz pwm switching signal. The design is essentially focused upon low power electronic appliances such as light, fan, chargers, television etc. In our project we used STP55NF06 NMOSFET, which is a depletion type N channel MOSFET. For driving the MOSFET we used TLP250 and totem pole configuration as a MOSFET driver. The inverter input is 12VDC and its output is 220VAC across a transformer. The complete design is modeled in proteus software and its output is verified practically.", "title": "" }, { "docid": "2fbd1b2e25473affb40990195b26a88b", "text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. 
Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.", "title": "" }, { "docid": "3bd6674bec87cd46d8e43d4e4ec09574", "text": "We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.", "title": "" }, { "docid": "5f22c60d28394ff73f7b2b73d68de5a0", "text": "Educational programming environments such as Microsoft Research's Kodu Game Lab are often used to introduce novices to computer science concepts and programming. Unlike many other educational languages that rely on scripting and Java-like syntax, the Kodu language is entirely event-driven and programming takes the form of \"when\" do' clauses. Despite this simplistic programing model, many computer science concepts can be expressed using Kodu. We identify and measure the frequency of these concepts in 346 Kodu programs created by users, and find that most programs exhibit sophistication through the use of complex control flow and boolean logic. Through Kodu's non-traditional language, we show that users express and explore fundamental computer science concepts.", "title": "" }, { "docid": "09dc061dfb788aa8ef2d1e88188157d6", "text": "A wideband dual-polarized slot-coupled stacked patch antenna operating in the UMTS (1920-2170 MHz), WLAN (2.4-2.484 GHz), and UMTS II (2500-2690 MHz) frequency bands is described. Measurements on a prototype of the proposed patch antenna confirm good performance in terms of both impedance matching and isolation", "title": "" } ]
scidocsrr
5ea220687808948ba72476673c98dd8c
End-to-End Learning of Video Super-Resolution with Motion Compensation
[ { "docid": "33de1981b2d9a0aa1955602006d09db9", "text": "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" } ]
[ { "docid": "28a9a0d096fa469ed00934336edd3331", "text": "The new generation of field programmable gate array (FPGA) technologies enables an embedded processor intellectual property (IP) and an application IP to be integrated into a system-on-a-programmable-chip (SoPC) developing environment. Therefore, this study presents a speed control integrated circuit (IC) for permanent magnet synchronous motor (PMSM) drive under this SoPC environment. First, the mathematic model of PMSM is defined and the vector control used in the current loop of PMSM drive is explained. Then, an adaptive fuzzy controller adopted to cope with the dynamic uncertainty and external load effect in the speed loop of PMSM drive is proposed. After that, an FPGA-based speed control IC is designed to realize the controllers. The proposed speed control IC has two IPs, a Nios II embedded processor IP and an application IP. The Nios II processor is used to develop the adaptive fuzzy controller in software due to the complicated control algorithm and low sampling frequency control (speed control: 2 kHz). The designed application IP is utilized to implement the current vector controller in hardware owing to the requirement for high sampling frequency control (current loop: 16 kHz, pulsewidth modulation circuit: 4-8 MHz) but simple computation. Finally, an experimental system is set up and some experimental results are demonstrated.", "title": "" }, { "docid": "d9e032e2e80c59125df47328e1f35520", "text": "Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices. Current designs, however, accelerate generic CNNs; they do not exploit the unique characteristics of real-time vision. We propose to use the temporal redundancy in natural video to avoid unnecessary computation on most frames. A new algorithm, activation motion compensation, detects changes in the visual input and incrementally updates a previously-computed activation. The technique takes inspiration from video compression and applies well-known motion estimation techniques to adapt to visual changes. We use an adaptive key frame rate to control the trade-off between efficiency and vision quality as the input changes. We implement the technique in hardware as an extension to state-of-the-art CNN accelerator designs. The new unit reduces the average energy per frame by 54%, 62%, and 87% for three CNNs with less than 1% loss in vision accuracy.", "title": "" }, { "docid": "65d60131b1ceba50399ceffa52de7e8a", "text": "Cox, Matthew L. Miller, and Jeffrey A. Bloom. San Diego, CA: Academic Press, 2002, 576 pp. $69.96 (hardbound). A key ingredient to copyright protection, digital watermarking provides a solution to the illegal copying of material. It also has broader uses in recording and electronic transaction tracking. This book explains “the principles underlying digital watermarking technologies, describes the requirements that have given rise to them, and discusses the diverse ends to which these technologies are being applied.” [book notes] The authors are extensively experienced in digital watermarking technologies. Cox recently joined the NEC Research Institute after a five-year stint at AT&T Bell Labs. Miller’s interest began at AT&T Bell Labs in 1979. He also is employed at NEC. Bloom is a researcher in digital watermarking at the Sarnoff Corporation. His acquaintance with the field began at Signafy, Inc. and continued through his employment at NEC Research Institute. 
The book features the following: Review of the underlying principles of watermarking relevant for image, video, and audio; Discussion of a wide variety of applications, theoretical principles, detection and embedding concepts, and key properties; Examination of copyright protection and other applications; Presentation of a series of detailed examples that illustrate watermarking concepts and practices; Appendix, in print and on the Web, containing the source code for the examples; Comprehensive glossary of terms. “The authors provide a comprehensive overview of digital watermarking, rife with detailed examples and grounded within strong theoretical framework. Digital Watermarking will serve as a valuable introduction as well as a useful reference for those engaged in the field.”—Walter Bender, Director, M.I.T. Media Lab", "title": "" }, { "docid": "4829d8c0dd21f84c3afbe6e1249d6248", "text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.", "title": "" }, { "docid": "c9fc426722df72b247093779ad6e2c0e", "text": "Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. When the ground conditions and stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. In this paper, we first formulate the constraints of the foot motion parameters. By varying the values of the constraint parameters, we can produce different types of foot motion to adapt to ground conditions. We then propose a method for formulating the problem of the smooth hip motion with the largest stability margin using only two parameters, and derive the hip trajectory by iterative computation. Finally, the correlation between the actuator specifications and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.", "title": "" }, { "docid": "c94400a5141e4bf5088ce7a79e3ee162", "text": "Discourse Analysis Introduction. Discourse analysis is the study of language in use. It rests on the basic premise that language cannot be understood without reference to the context, both linguistic and extra-linguistic, in which it is used. It draws from the findings and methodologies of a wide range of fields, such as anthropology, philosophy, sociology, social and cognitive psychology, and artificial intelligence. 
It is itself a broad field comprised of a large number of linguistic subfields and approaches, including speech act theory, conversation analysis, pragmatics, and the ethnography of speaking. At the same time, the lines between certain linguistic subfields, in particular psycholinguistics, anthropological linguistics, and cognitive linguistics and discourse analysis overlap, and approaches to the study of discourse are informed by these subfields, and in many cases findings are independently corroborated. As a very interdisciplinary approach, the boundaries of this field are fuzzy. 1 The fundamental assumption underlying all approaches to discourse analysis is that language must be studied as it is used, in its context of production, and so the object of analysis is very rarely in the form of a sentence. Instead, written or spoken texts, usually larger than one sentence or one utterance, provide the data. In other words, the discourse analyst works with naturally occurring corpora, and with such corpora come a wide variety of features such as hesitations, non-standard forms, self-corrections, repetitions, incomplete clauses, words, and so—all linguistic material which would be relegated to performance by Chomsky (1965) and so stand outside the scope of analysis for many formal linguists. But for the discourse analyst, such \" performance \" data are 1 It is interesting, in this light, to compare the contents of several standard handbooks of discourse analysis. Brown and Yule (1986) focus heavily on pragmatics and information structure, while Schiffrin (1994) includes several chapters directly related to sociolinguistic methodologies (i.e. chapters on interactional sociolinguistics, ethnomethodology and variation analysis). Mey (1993) has three chapters on conversation analysis (a topic which Schiffrin also covers) and a chapter on \" societal pragmatics. \" Lenore Grenoble, Discourse Analysis 2 indeed relevant and may in fact be the focus of research. The focus on actual instances of language use also means that the analysis does not look at language only as an abstract system; this is a fundamental difference between formal work on syntax versus discourse analysis. This paper first provides an overview of discourse analysis and …", "title": "" }, { "docid": "bb8686ab6443a3e4a24d7185c1584228", "text": "This paper deals with a new 3-degree-of-freedom (DOF) parallel mechanism for a flat-panel TV mounting device with two rotations and one translation. The most important operational requirements of this device are that it should support the heavy weight of the flat-panel TV and should be foldable to save space between the flat panel and the wall. An asymmetric parallel structure that has three kinematic chains with internal four-bar linkage is proposed to meet such requirements. Kinematic modeling was performed along with actuator sizing. Finally, the mechanism was developed and tested to show its effectiveness as a flat-panel TV mounting device.", "title": "" }, { "docid": "1a4659d235395c3a831aacdd53cbf273", "text": "An enduring issue in higher education is student retention to successful graduation. To further this goal, we develop a system for the task of predicting students' course grades for the next enrollment term in a traditional university setting. Each term, students enroll in a limited number of courses and earn grades in the range A-F for each course. 
Given historical grade data, our task is to predict the grades for each student in the courses they will enroll in during the next term. With this problem formulation, the next-term student grade prediction problem becomes quite similar to a rating prediction or next-basket recommendation problem. The factorization machine (FM), a general-purpose matrix factorization (MF) algorithm suitable for this task, is leveraged as the state-of-the-art method and compared to a variety of other methods. Our experiments show that FMs achieve the lowest prediction error. Results for both cold-start and non-cold-start prediction demonstrate that FMs can be used to accurately predict in both settings. Finally, we identify limitations observed in FMs and the other models tested and discuss directions for future work. To our knowledge, this is the first study that applies state-of-the-art collaborative filtering algorithms to solve the next-term student grade prediction problem.", "title": "" }, { "docid": "9806e837e1d988aa2cfb10e7500d2267", "text": "The high-functioning Autism Spectrum Screening Questionnaire (ASSQ) is a 27-item checklist for completion by lay informants when assessing symptoms characteristic of Asperger syndrome and other high-functioning autism spectrum disorders in children and adolescents with normal intelligence or mild mental retardation. Data for parent and teacher ratings in a clinical sample are presented along with various measures of reliability and validity. Optimal cutoff scores were estimated, using Receiver Operating Characteristic analysis. Findings indicate that the ASSQ is a useful brief screening device for the identification of autism spectrum disorders in clinical settings.", "title": "" }, { "docid": "fa68493c999a154dfc8638aa27255e93", "text": "We develop a kernel density estimation method for estimating the density of points on a network and implement the method in the GIS environment. This method could be applied to, for instance, finding 'hot spots' of traffic accidents, street crimes or leakages in gas and oil pipe lines. We first show that the application of the ordinary two-dimensional kernel method to density estimation on a network produces biased estimates. Second, we formulate a 'natural' extension of the univariate kernel method to density estimation on a network, and prove that its estimator is biased; in particular, it overestimates the densities around nodes. Third, we formulate an unbiased discontinuous kernel function on a network, and fourth, an unbiased continuous kernel function on a network. Fifth, we develop computational methods for these kernels and derive their computational complexity. We also develop a plug-in tool for operating these methods in the GIS environment. Sixth, an application of the proposed methods to the density estimation of bag-snatches on streets is illustrated. Lastly, we summarize the major results and describe some suggestions for the practical use of the proposed methods.", "title": "" }, { "docid": "d7108ba99aaa9231d926a52617baa712", "text": "In this paper, an ultra-compact single-chip solar energy harvesting IC using on-chip solar cell for biomedical implant applications is presented. By employing an on-chip charge pump with parallel connected photodiodes, a 3.5 <inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math></inline-formula> efficiency improvement can be achieved when compared with the conventional stacked photodiode approach to boost the harvested voltage while preserving a single-chip solution. 
A photodiode-assisted dual startup circuit (PDSC) is also proposed to improve the area efficiency and increase the startup speed by 77%. By employing an auxiliary charge pump (AQP) using zero threshold voltage (ZVT) devices in parallel with the main charge pump, a low startup voltage of 0.25 V is obtained while minimizing the reversion loss. A <inline-formula> <tex-math notation=\"LaTeX\">$4\\, {\\mathbf{V}}_{\\mathbf{in}}$</tex-math></inline-formula> gate drive voltage is utilized to reduce the conduction loss. Systematic charge pump and solar cell area optimization is also introduced to improve the energy harvesting efficiency. The proposed system is implemented in a standard 0.18- <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{m}$</tex-math></inline-formula> CMOS technology and occupies an active area of 1.54 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mm}^{2}$</tex-math></inline-formula>. Measurement results show that the on-chip charge pump can achieve a maximum efficiency of 67%. With an incident power of 1.22 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mW/cm}^{2}$</tex-math></inline-formula> from a halogen light source, the proposed energy harvesting IC can deliver an output power of 1.65 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> at 64% charge pump efficiency. The chip prototype is also verified using <italic>in-vitro</italic> experiment.", "title": "" }, { "docid": "dbf3650aadb4c18500ec3676d23dba99", "text": "Current search engines do not, in general, perform well with longer, more verbose queries. One of the main issues in processing these queries is identifying the key concepts that will have the most impact on effectiveness. In this paper, we develop and evaluate a technique that uses query-dependent, corpus-dependent, and corpus-independent features for automatic extraction of key concepts from verbose queries. We show that our method achieves higher accuracy in the identification of key concepts than standard weighting methods such as inverse document frequency. Finally, we propose a probabilistic model for integrating the weighted key concepts identified by our method into a query, and demonstrate that this integration significantly improves retrieval effectiveness for a large set of natural language description queries derived from TREC topics on several newswire and web collections.", "title": "" }, { "docid": "61f257b3cebc439d7902e6c85b525237", "text": "In this paper, we propose a generalization of the algorithm we developed previously. Along the way, we also develop a theory of quaternionic M symbols whose definition bears some resemblance to the classical M -symbols, except for their combinatorial nature. The theory gives a more efficient way to compute Hilbert modular forms over totally real number fields, especially quadratic fields, and we have illustrated it with several examples. Namely, we have computed all the newforms of prime levels of norm less than 100 over the quadratic fields Q( √ 29) and Q( √ 37), and whose Fourier coefficients are rational or are defined over a quadratic field.", "title": "" }, { "docid": "c87cc578b4a74bae4ea1e0d0d68a6038", "text": "Human-Computer Interaction (HCI) exists ubiquitously in our daily lives. It is usually achieved by using a physical controller such as a mouse, keyboard or touch screen. It hinders Natural User Interface (NUI) as there is a strong barrier between the user and computer. 
There are various hand tracking systems available on the market, but they are complex and expensive. In this paper, we present the design and development of a robust marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex background and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. It enables intuitive HCI and interactive motion gaming. We also developed sample applications that can utilize the inputs from the hand tracking system. Our results show that an intuitive HCI and motion gaming system can be achieved with minimum hardware requirements.", "title": "" }, { "docid": "e135ec51f4406f42625c6610ca926b7b", "text": "Search engines became a de facto place to start information acquisition on the Web. Though due to web spam phenomenon, search results are not always as good as desired. Moreover, spam evolves that makes the problem of providing high quality search even more challenging. Over the last decade research on adversarial information retrieval has gained a lot of interest both from academia and industry. In this paper we present a systematic review of web spam detection techniques with the focus on algorithms and underlying principles. We categorize all existing algorithms into three categories based on the type of information they use: content-based methods, link-based methods, and methods based on non-traditional data such as user behaviour, clicks, HTTP sessions. In turn, we perform a subcategorization of link-based category into five groups based on ideas and principles used: labels propagation, link pruning and reweighting, labels refinement, graph regularization, and featurebased. We also define the concept of web spam numerically and provide a brief survey on various spam forms. Finally, we summarize the observations and underlying principles applied for web spam detection.", "title": "" }, { "docid": "2440e4a18413a6fb0c66327b2e30baca", "text": "In studying grasping and manipulation we find two very different approaches to the subject: knowledge-based approaches based primarily on empirical studies of human grasping and manipulation, and analytical approaches based primarily on physical models of the manipulation process. This chapter begins with a review of studies of human grasping, in particular our development of a grasp taxonomy and an expert system for predicting human grasp choice. These studies show how object geometry and task requirements (as well as hand capabilities and tactile sensing) combine to dictate grasp choice. We then consider analytic models of grasping and manipulation with robotic hands. To keep the mathematics tractable, these models require numerous simplifications which restrict their generality. Despite their differences, the two approaches can be correlated. This provides insight into why people grasp and manipulate objects as they do, and suggests different approaches for robotic grasp and manipulation planning. The results also bear upon such issues such as object representation", "title": "" }, { "docid": "2a325c47252bf239751b46cbd5346a30", "text": "OBJECTIVE To evaluate the larvicidal activity of Azadirachta indica, Melaleuca alternifolia, carapa guianensis essential oils and fermented extract of Carica papaya against Aedes aegypti (Linnaeus, 1762) (Diptera: Culicidae). 
METHODS The larvicide test was performed in triplicate with 300 larvae for each experimental group using the third larval stage, which were exposed for 24h. The groups were: positive control with industrial larvicide (BTI) in concentrations of 0.37 ppm (PC1) and 0.06 ppm (PC2); treated with compounds of essential oils and fermented extract, 50.0% concentration (G1); treated with compounds of essential oils and fermented extract, 25.0% concentration (G2); treated with compounds of essential oils and fermented extract, 12.5% concentration (G3); and negative control group using water (NC1) and using dimethyl (NC2). The larvae were monitored every 60 min using direct visualization. RESULTS No mortality occurred in experimental groups NC1 and NC2 in the 24h exposure period, whereas there was 100% mortality in the PC1 and PC2 groups compared to NC1 and NC2. Mortality rates of 65.0%, 50.0% and 78.0% were observed in the groups G1, G2 and G3 respectively, compared with NC1 and NC2. CONCLUSIONS The association between three essential oils from Azadirachta indica, Melaleuca alternifolia, Carapa guianensis and fermented extract of Carica papaya was efficient at all concentrations. Therefore, it can be used in Aedes aegypti Liverpool third larvae stage control programs.", "title": "" }, { "docid": "9a1151e45740dfa663172478259b77b6", "text": "Every year, several new ontology matchers are proposed in the literature, each one using a different heuristic, which implies in different performances according to the characteristics of the ontologies. An ontology metamatcher consists of an algorithm that combines several approaches in order to obtain better results in different scenarios. To achieve this goal, it is necessary to define a criterion for the use of matchers. We presented in this work an ontology meta-matcher that combines several ontology matchers making use of the evolutionary meta-heuristic prey-predator as a means of parameterization of the same. Resumo. Todo ano, diversos novos alinhadores de ontologias são propostos na literatura, cada um utilizando uma heurı́stica diferente, o que implica em desempenhos distintos de acordo com as caracterı́sticas das ontologias. Um meta-alinhador consiste de um algoritmo que combina diversas abordagens a fim de obter melhores resultados em diferentes cenários. Para atingir esse objetivo, é necessária a definição de um critério para melhor uso de alinhadores. Neste trabalho, é apresentado um meta-alinhador de ontologias que combina vários alinhadores através da meta-heurı́stica evolutiva presa-predador como meio de parametrização das mesmas.", "title": "" }, { "docid": "1b2c561b6aea994ef50b713f0b5286a1", "text": "This paper presents a novel system architecture applicable to high-performance and flexible transport data processing which includes complex protocol operation and a nehvork control algorithm. We developed a new tightly coupled Held Programmable Gate Array (FPGA) and Micro-Processing Unit (MPU) system named. Yet Another Re-Definable System (YARDS). It comprises three programmable devices which equateto high flexibility. These devices are the RISC-type MPU with memories, programmable inter-connection devices, and FPGAs. Using these, this system supports various styles of coupling between the FPGAs and the MPU which are suitable for constructing transport data processing. In this paper, two applications of the systemin the telecommunications field are given. 
One is Operation, Administration, and Management (OAM) cell operations on an Asynchronous Transfer Mode (ATM) network. The other is a dynamic configuration protocol that enables the update or change of the functions of the transport data processing system on-line. This is the first approach applying the FPGA/MPU hybrid system to the telecommunications field.", "title": "" } ]
scidocsrr
f351f9000e61dad83dc3ea9f7090019f
Bag-of-Vector Embeddings of Dependency Graphs for Semantic Induction
[ { "docid": "0a170051e72b58081ad27e71a3545bcf", "text": "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "title": "" }, { "docid": "5664ca8d7f0f2f069d5483d4a334c670", "text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.", "title": "" } ]
[ { "docid": "8a257223c6d9b5c6c6b17e023f010c66", "text": "Emojis are an extremely common occurrence in mobile communications, but their meaning is open to interpretation. We investigate motivations for their usage in mobile messaging in the US. This study asked 228 participants for the last time that they used one or more emojis in a conversational message, and collected that message, along with a description of the emojis' intended meaning and function. We discuss functional distinctions between: adding additional emotional or situational meaning, adjusting tone, making a message more engaging to the recipient, conversation management, and relationship maintenance. We discuss lexical placement within messages, as well as social practices. We show that the social and linguistic function of emojis are complex and varied, and that supporting emojis can facilitate important conversational functions.", "title": "" }, { "docid": "49910c444cef98bdea4fca1beb8381c3", "text": "This paper introduces the concept of gait transitions, acyclic feedforward motion patterns that allow a robot to switch from one gait to another. Legged robots often utilize collections of gait patterns to locomote over a variety of surfaces. Each feedforward gait is generally tuned for a specific surface and set of operating conditions. To enable locomotion across a changing surface, a robot must be able to stably change between gaits while continuing to locomote. By understanding the fundamentals of gaits, we present methods to correctly transition between differing gaits. On two separate robotic platforms, we show how the application of gait transitions enhances each robot's behavioral suite. Using the RHex robotic hexapod, gait transitions are used to smoothly switch from a tripod walking gait to a metachronal wave gait used to climb stairs. We also introduce the RiSE platform, a hexapod robot capable of vertical climbing, and discuss how gait transitions play an important role in achieving vertical mobility", "title": "" }, { "docid": "07239163734357138011bbcc7b9fd38f", "text": "Open cross-section, thin-walled, cold-formed steel columns have at least three competing buckling modes: local, dis and Euler~i.e., flexural or flexural-torsional ! buckling. Closed-form prediction of the buckling stress in the local mode, includ interaction of the connected elements, and the distortional mode, including consideration of the elastic and geometric stiffne web/flange juncture, are provided and shown to agree well with numerical methods. Numerical analyses and experiments postbuckling capacity in the distortional mode is lower than in the local mode. Current North American design specificati cold-formed steel columns ignore local buckling interaction and do not provide an explicit check for distortional buckling. E experiments on cold-formed channel, zed, and rack columns indicate inconsistency and systematic error in current design me provide validation for alternative methods. A new method is proposed for design that explicitly incorporates local, distortional an buckling, does not require calculations of effective width and/or effective properties, gives reliable predictions devoid of systema and provides a means to introduce rational analysis for elastic buckling prediction into the design of thin-walled columns. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:3~289! 
CE Database keywords: Thin-wall structures; Columns; Buckling; Cold-formed steel.", "title": "" }, { "docid": "2d981243bfb30196474d5855043fa7b7", "text": "Gamification, an application of game design elements to non-gaming contexts, is proposed as a way to add engagement in technology-mediated training programs. But there is hardly any information on how to adapt game design elements to improve learning outcomes and promote learner engagement. To address the issue, we focus on a popular game design element, competition, and specifically examine the effects of different competitive structures – whether a person faces a higher-skilled, lower-skilled, or equally-skilled competitor – on learning and engagement. We study a gamified training design for databases, where trainees play a trivia-based mini-game with a competitor after each e-training module. Trainees who faced a lower-skilled competitor reported higher self-efficacy beliefs and better learning outcomes, supporting the effect of peer appraisal, a less examined aspect of social cognitive theory. But trainees who faced equally-skilled competitors reported higher levels of engagement, supporting the balance principle of flow theory. Our study findings indicate that no one competitive structure can address learning and engagement outcomes simultaneously. The choice of competitive structures depends on the priority of the outcomes in training. Our findings provide one explanation for the mixed findings on the effect of competitive gamification designs in technology mediated training.", "title": "" }, { "docid": "35c18e570a6ab44090c1997e7fe9f1b4", "text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. Meanwhile users can maintain full control over access to their uploaded files and data, by assigning fine-grained, attribute-based access privileges to selected files and data, while different users can have access to different parts of the System. This application allows clients to set privileges to different users to access their data.", "title": "" }, { "docid": "01c5231566670caa9a0ca94f8f5ef558", "text": "In recent years, many volumetric illumination models have been proposed, which have the potential to simulate advanced lighting effects and thus support improved image comprehension. Although volume ray-casting is widely accepted as the volume rendering technique which achieves the highest image quality, so far no volumetric illumination algorithm has been designed to be directly incorporated into the ray-casting process. In this paper we propose image plane sweep volume illumination (IPSVI), which allows the integration of advanced illumination effects into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. Thus, we are able to reduce the problem complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. 
Since all illumination computations are performed directly within a single rendering pass, IPSVI does not require any preprocessing nor does it need to store intermediate results within an illumination volume. It therefore has a significantly lower memory footprint than other techniques. This makes IPSVI directly applicable to large data sets. Furthermore, the integration into a GPU-based ray-caster allows for high image quality as well as improved rendering performance by exploiting early ray termination. This paper discusses the theory behind IPSVI, describes its implementation, demonstrates its visual results and provides performance measurements.", "title": "" }, { "docid": "2820f1623ab5c17e18c8a237156c2d36", "text": "In a two-tier heterogeneous network (HetNet) where small base stations (SBSs) coexist with macro base stations (MBSs), the SBSs may suffer significant performance degradation due to the inter- and intra-tier interferences. Introducing cognition into the SBSs through the spectrum sensing (e.g., carrier sensing) capability helps them detecting the interference sources and avoiding them via opportunistic access to orthogonal channels. In this paper, we use stochastic geometry to model and analyze the performance of two cases of cognitive SBSs in a multichannel environment, namely, the semi-cognitive case and the full-cognitive case. In the semi-cognitive case, the SBSs are only aware of the interference from the MBSs, hence, only inter-tier interference is minimized. On the other hand, in the full-cognitive case, the SBSs access the spectrum via a contention resolution process, hence, both the intra- and intertier interferences are minimized, but at the expense of reduced spectrum access opportunities. We quantify the performance gain in outage probability obtained by introducing cognition into the small cell tier for both the cases. We will focus on a special type of SBSs called the femto access points (FAPs) and also capture the effect of different admission control policies, namely, the open-access and closed-access policies. We show that a semi-cognitive SBS always outperforms a full-cognitive SBS and that there exists an optimal spectrum sensing threshold for the cognitive SBSs which can be obtained via the analytical framework presented in this paper.", "title": "" }, { "docid": "f578c9ea0ac7f28faa3d9864c0e43711", "text": "Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph convolutional networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. 
In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.", "title": "" }, { "docid": "0dcfd748b2ea70de8b84b9056eb79fc4", "text": "The number of resource-limited wireless devices utilized in many areas of Internet of Things is growing rapidly; there is a concern about privacy and security. Various lightweight block ciphers are proposed; this work presents a modified lightweight block cipher algorithm. A Linear Feedback Shift Register is used to replace the key generation function in the XTEA1 Algorithm. Using the same evaluation conditions, we analyzed the software implementation of the modified XTEA using FELICS (Fair Evaluation of Lightweight Cryptographic Systems) a benchmarking framework which calculates RAM footprint, ROM occupation and execution time on three largely used embedded devices: 8-bit AVR microcontroller, 16-bit MSP microcontroller and 32-bit ARM microcontroller. Implementation results show that it provides less software requirements compared to original XTEA. We enhanced the security level and the software performance.", "title": "" }, { "docid": "b9c54211575909291cbd4428781a3b05", "text": "The purpose is to arrive at recognition of multicolored objects invariant to a substantial change in viewpoint, object geometry and illumination. Assuming dichromatic reflectance and white illumination, it is shown that normalized color rgb, saturation S and hue H, and the newly proposed color models c 1 c 2 c 3 and l 1 l 2 l 3 are all invariant to a change in viewing direction, object geometry and illumination. Further, it is shown that hue H and l 1 l 2 l 3 are also invariant to highlights. Finally, a change in spectral power distribution of the illumination is considered to propose a new color constant color model m 1 m 2 m 3 . To evaluate the recognition accuracy differentiated for the various color models, experiments have been carried out on a database consisting of 500 images taken from 3-D multicolored man-made objects. The experimental results show that highest object recognition accuracy is achieved by l 1 l 2 l 3 and hue H followed by c 1 c 2 c 3 , normalized color rgb and m 1 m 2 m 3 under the constraint of white illumination. Also, it is demonstrated that recognition accuracy degrades substantially for all color features other than m 1 m 2 m 3 with a change in illumination color. The recognition scheme and images are available within the PicToSeek and Pic2Seek systems on-line at: http: //www.wins.uva.nl/research/isis/zomax/. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "0b28e0e8637a666d616a8c360d411193", "text": "As a novel dynamic network service infrastructure, Internet of Things (IoT) has gained remarkable popularity with obvious superiorities in the interoperability and real-time communication. Despite of the convenience in collecting information to provide the decision basis for the users, the vulnerability of embedded sensor nodes in multimedia devices makes the malware propagation a growing serious problem, which would harm the security of devices and their users financially and physically in wireless multimedia system (WMS). Therefore, many researches related to the malware propagation and suppression have been proposed to protect the topology and system security of wireless multimedia network. 
In these studies, the epidemic model is of great significance to the analysis of malware propagation. Considering the cloud and state transition of sensor nodes, a cloud-assisted model for malware detection and the dynamic differential game against malware propagation are proposed in this paper. Firstly, a SVM based malware detection model is constructed with the data sharing at the security platform in the cloud. Then the number of malware-infected nodes with physical infectivity to susceptible nodes is calculated precisely based on the attributes of WMS transmission. Then the state transition among WMS devices is defined by the modified epidemic model. Furthermore, a dynamic differential game and target cost function are successively derived for the Nash equilibrium between malware and WMS system. On this basis, a saddle-point malware detection and suppression algorithm is presented depending on the modified epidemic model and the computation of optimal strategies. Numerical results and comparisons show that the proposed algorithm can increase the utility of WMS efficiently and effectively.", "title": "" }, { "docid": "25deed9855199ef583524a2eef0456f0", "text": "We introduce a method for creating very dense reconstructions of datasets, particularly turn-table varieties. The method takes in initial reconstructions (of any origin) and makes them denser by interpolating depth values in two-dimensional image space within a superpixel region and then optimizing the interpolated value via image consistency analysis across neighboring images in the dataset. One of the core assumptions in this method is that depth values per pixel will vary gradually along a gradient for a given object. As such, turntable datasets, such as the dinosaur dataset, are particularly easy for our method. Our method modernizes some existing techniques and parallelizes them on a GPU, which produces results faster than other densification methods.", "title": "" }, { "docid": "2b8305c10f1105905f2a2f9651cb7c9f", "text": "Many distributed collective decision-making processes must balance diverse individual preferences with a desire for collective unity. We report here on an extensive session of behavioral experiments on biased voting in networks of individuals. In each of 81 experiments, 36 human subjects arranged in a virtual network were financially motivated to reach global consensus to one of two opposing choices. No payments were made unless the entire population reached a unanimous decision within 1 min, but different subjects were paid more for consensus to one choice or the other, and subjects could view only the current choices of their network neighbors, thus creating tensions between private incentives and preferences, global unity, and network structure. 
Along with analyses of how collective and individual performance vary with network structure and incentives generally, we find that there are well-studied network topologies in which the minority preference consistently wins globally; that the presence of \"extremist\" individuals, or the awareness of opposing incentives, reliably improve collective performance; and that certain behavioral characteristics of individual subjects, such as \"stubbornness,\" are strongly correlated with earnings.", "title": "" }, { "docid": "b347cea48fea5341737e315535ea57e5", "text": "1 EXTENDED ABSTRACT Real world interactions are full of coordination problems [2, 3, 8, 14, 15] and thus constructing agents that can solve them is an important problem for artificial intelligence research. One of the simplest, most heavily studied coordination problems is the matrixform, two-player Stag Hunt. In the Stag Hunt, each player makes a choice between a risky action (hunt the stag) and a safe action (forage for mushrooms). Foraging for mushrooms always yields a safe payoff while hunting yields a high payoff if the other player also hunts but a very low payoff if one shows up to hunt alone. This game has two important Nash equilibria: either both players show up to hunt (this is called the payoff dominant equilibrium) or both players stay home and forage (this is called the risk-dominant equilibrium [7]). In the Stag Hunt, when the payoff to hunting alone is sufficiently low, dyads of learners as well as evolving populations converge to the risk-dominant (safe) equilibrium [6, 8, 10, 11]. The intuition here is that even a slight amount of doubt about whether one’s partner will show up causes an agent to choose the safe action. This in turn causes partners to be less likely to hunt in the future and the system trends to the inefficient equilibrium. We are interested in the problem of agent design: our task is to construct an agent that will go into an initially poorly understood environment and make decisions. Our agent must learn from its experiences to update its policy and maximize some scalar reward. However, there will also be other agents which we do not control. These agents will also learn from their experiences. We ask: if the environment has Stag Hunt-like properties, can we make changes to our agent’s learning to improve its outcomes? We focus on reinforcement learning (RL), however, many of our results should generalize to other learning algorithms.", "title": "" }, { "docid": "9f5f79a19d3a181f5041a7b5911db03a", "text": "BACKGROUND\nNucleoside analogues against herpes simplex virus (HSV) have been shown to suppress shedding of HSV type 2 (HSV-2) on genital mucosal surfaces and may prevent sexual transmission of HSV.\n\n\nMETHODS\nWe followed 1484 immunocompetent, heterosexual, monogamous couples: one with clinically symptomatic genital HSV-2 and one susceptible to HSV-2. The partners with HSV-2 infection were randomly assigned to receive either 500 mg of valacyclovir once daily or placebo for eight months. The susceptible partner was evaluated monthly for clinical signs and symptoms of genital herpes. Source partners were followed for recurrences of genital herpes; 89 were enrolled in a substudy of HSV-2 mucosal shedding. Both partners were counseled on safer sex and were offered condoms at each visit. 
The predefined primary end point was the reduction in transmission of symptomatic genital herpes.\n\n\nRESULTS\nClinically symptomatic HSV-2 infection developed in 4 of 743 susceptible partners who were given valacyclovir, as compared with 16 of 741 who were given placebo (hazard ratio, 0.25; 95 percent confidence interval, 0.08 to 0.75; P=0.008). Overall, acquisition of HSV-2 was observed in 14 of the susceptible partners who received valacyclovir (1.9 percent), as compared with 27 (3.6 percent) who received placebo (hazard ratio, 0.52; 95 percent confidence interval, 0.27 to 0.99; P=0.04). HSV DNA was detected in samples of genital secretions on 2.9 percent of the days among the HSV-2-infected (source) partners who received valacyclovir, as compared with 10.8 percent of the days among those who received placebo (P<0.001). The mean rates of recurrence were 0.11 per month and 0.40 per month, respectively (P<0.001).\n\n\nCONCLUSIONS\nOnce-daily suppressive therapy with valacyclovir significantly reduces the risk of transmission of genital herpes among heterosexual, HSV-2-discordant couples.", "title": "" }, { "docid": "332a30e8d03d4f8cc03e7ab9b809ec9f", "text": "The study of electromyographic (EMG) signals has gained increased attention in the last decades since the proper analysis and processing of these signals can be instrumental for the diagnosis of neuromuscular diseases and the adaptive control of prosthetic devices. As a consequence, various pattern recognition approaches, consisting of different modules for feature extraction and classification of EMG signals, have been proposed. In this paper, we conduct a systematic empirical study on the use of Fractal Dimension (FD) estimation methods as feature extractors from EMG signals. The usage of FD as feature extraction mechanism is justified by the fact that EMG signals usually show traces of selfsimilarity and by the ability of FD to characterize and measure the complexity inherent to different types of muscle contraction. In total, eight different methods for calculating the FD of an EMG waveform are considered here, and their performance as feature extractors is comparatively assessed taking into account nine well-known classifiers of different types and complexities. Results of experiments conducted on a dataset involving seven distinct types of limb motions are reported whereby we could observe that the normalized version of the Katz's estimation method and the Hurst exponent significantly outperform the others according to a class separability measure and five well-known accuracy measures calculated over the induced classifiers. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f5c4bdf959e455193221a1fa76e1895a", "text": "This book contains a wide variety of hot topics on advanced computational intelligence methods which incorporate the concept of complex and hypercomplex number systems into the framework of artificial neural networks. In most chapters, the theoretical descriptions of the methodology and its applications to engineering problems are excellently balanced. This book suggests that a better information processing method could be brought about by selecting a more appropriate information representation scheme for specific problems, not only in artificial neural networks but also in other computational intelligence frameworks. The advantages of CVNNs and hypercomplex-valued neural networks over real-valued neural networks are confirmed in some case studies but still unclear in general. 
Hence, there is a need to further explore the difference between them from the viewpoint of nonlinear dynamical systems. Nevertheless, it seems that the applications of CVNNs and hypercomplex-valued neural networks are very promising.", "title": "" }, { "docid": "14835b93b580081b0398e5e370b72c2c", "text": "In order for autonomous vehicles to achieve life-long operation in outdoor environments, navigation systems must be able to cope with visual change—whether it’s short term, such as variable lighting or weather conditions, or long term, such as different seasons. As a Global Positioning System (GPS) is not always reliable, autonomous vehicles must be self sufficient with onboard sensors. This thesis examines the problem of localisation against a known map across extreme lighting and weather conditions using only a stereo camera as the primary sensor. The method presented departs from traditional techniques that blindly apply out-of-the-box interest-point detectors to all images of all places. This naive approach fails to take into account any prior knowledge that exists about the environment in which the robot is operating. Furthermore, the point-feature approach often fails when there are dramatic appearance changes, as associating low-level features such as corners or edges is extremely difficult and sometimes not possible. By leveraging knowledge of prior appearance, this thesis presents an unsupervised method for learning a set of distinctive and stable (i.e., stable under appearance changes) feature detectors that are unique to a specific place in the environment. In other words, we learn place-dependent feature detectors that enable vastly superior performance in terms of robustness in exchange for a reduced, but tolerable metric precision. By folding in a method for masking distracting objects in dynamic environments and examining a simple model for external illuminates, such as the sun, this thesis presents a robust localisation system that is able to achieve metric estimates from night-today or summer-to-winter conditions. Results are presented from various locations in the UK, including the Begbroke Science Park, Woodstock, Oxford, and central London. Statement of Authorship This thesis is submitted to the Department of Engineering Science, University of Oxford, in fulfilment of the requirements for the degree of Doctor of Philosophy. This thesis is entirely my own work, and except where otherwise stated, describes my own research. Colin McManus, Lady Margaret Hall Funding The work described in this thesis was funded by Nissan Motors.", "title": "" }, { "docid": "4973ce25e2a638c3923eda62f92d98b2", "text": "About 20 ethnic groups reside in Mongolia. On the basis of genetic and anthropological studies, it is believed that Mongolians have played a pivotal role in the peopling of Central and East Asia. However, the genetic relationships among these ethnic groups have remained obscure, as have their detailed relationships with adjacent populations. We analyzed 16 binary and 17 STR polymorphisms of human Y chromosome in 669 individuals from nine populations, including four indigenous ethnic groups in Mongolia (Khalkh, Uriankhai, Zakhchin, and Khoton). Among these four Mongolian populations, the Khalkh, Uriankhai, and Zakhchin populations showed relatively close genetic affinities to each other and to Siberian populations, while the Khoton population showed a closer relationship to Central Asian populations than to even the other Mongolian populations. 
These findings suggest that the major Mongolian ethnic groups have a close genetic affinity to populations in northern East Asia, although the genetic link between Mongolia and Central Asia is not negligible.", "title": "" }, { "docid": "f79472b17396fd180821b0c02fe92939", "text": "Bull breeds are commonly kept as companion animals, but the pit bull terrier is restricted by breed-specific legislation (BSL) in parts of the United States and throughout the United Kingdom. Shelter workers must decide which breed(s) a dog is. This decision may influence the dog's fate, particularly in places with BSL. In this study, shelter workers in the United States and United Kingdom were shown pictures of 20 dogs and were asked what breed each dog was, how they determined each dog's breed, whether each dog was a pit bull, and what they expected the fate of each dog to be. There was much variation in responses both between and within the United States and United Kingdom. UK participants frequently labeled dogs commonly considered by U.S. participants to be pit bulls as Staffordshire bull terriers. UK participants were more likely to say their shelters would euthanize dogs deemed to be pit bulls. Most participants noted using dogs' physical features to determine breed, and 41% affected by BSL indicated they would knowingly mislabel a dog of a restricted breed, presumably to increase the dog's adoption chances.", "title": "" } ]
scidocsrr
eaa855dcfd8fbda651d2287a26470b1b
SEMANTIC WEB MINING FOR INTELLIGENT WEB PERSONALIZATION
[ { "docid": "bed9bdf4d4965610b85378f2fdbfab2a", "text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is no established vocabulary, leading to confusion when comparing research efforts. The term Web mining has been used in two distinct ways. The first, called Web content mining in this paper, is the process of information discovery from sources across the World Wide Web. The second, called Web usage mining, is the process of mining for user browsing and access patterns. In this paper we define Web mining and present an overview of the various research issues, techniques, and development efforts. We briefly describe WEBMINER, a system for Web usage mining, and conclude this paper by listing research issues.", "title": "" } ]
[ { "docid": "553d7f8c6b4c04349b65379e1e6cb0d8", "text": "Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.", "title": "" }, { "docid": "b3a6bc6036376d33ef78896f21778a21", "text": "Document clustering has many important applications in the area of data mining and information retrieval. Many existing document clustering techniques use the “bag-of-words” model to represent the content of a document. However, this representation is only effective for grouping related documents when these documents share a large proportion of lexically equivalent terms. In other words, instances of synonymy between related documents are ignored, which can reduce the effectiveness of applications using a standard full-text document representation. To address this problem, we present a new approach for clustering scientific documents, based on the utilization of citation contexts. A citation context is essentially the text surrounding the reference markers used to refer to other scientific works. We hypothesize that citation contexts will provide relevant synonymous and related vocabulary which will help increase the effectiveness of the bag-of-words representation. In this paper, we investigate the power of these citation-specific word features, and compare them with the original document’s textual representation in a document clustering task on two collections of labeled scientific journal papers from two distinct domains: High Energy Physics and Genomics. We also compare these text-based clustering techniques with a link-based clustering algorithm which determines the similarity between documents based on the number of co-citations, that is in-links represented by citing documents and out-links represented by cited documents. Our experimental results indicate that the use of citation contexts, when combined with the vocabulary in the full-text of the document, is a promising alternative means of capturing critical topics covered by journal articles. More specifically, this document representation strategy when used by the clustering algorithm investigated in this paper, outperforms both the full-text clustering approach and the link-based clustering technique on both scientific journal datasets.", "title": "" }, { "docid": "1a86b10556b5e38823bbb1aadb5fb378", "text": "The advances in the field of machine learning using neuromorphic systems have paved the pathway for extensive research on possibilities of hardware implementations of neural networks. Various memristive technologies such as oxide-based devices, spintronics, and phase change materials have been explored to implement the core functional units of neuromorphic systems, namely the synaptic network, and the neuronal functionality, in a fast and energy efficient manner. 
However, various nonidealities in the crossbar implementations of the synaptic arrays can significantly degrade performance of neural networks, and hence, impose restrictions on feasible crossbar sizes. In this paper, we build mathematical models of various nonidealities that occur in crossbar implementations such as source resistance, neuron resistance, and chip-to-chip device variations and analyze their impact on the classification accuracy of a fully connected network (FCN) and convolutional neural network (CNN) trained with Backpropagation algorithm. We show that a network trained under ideal conditions can suffer accuracy degradation as large as 59.84% for FCNs and 62.4% for CNNs when implemented on nonideal crossbars for relevant nonideality ranges. This severely constrains the sizes for crossbars. As a solution, we propose a technology aware training algorithm, which incorporates the mathematical models of the nonidealities in the backpropagation algorithm. We demonstrate that our proposed methodology achieves significant recovery of testing accuracy within 1.9% of the ideal accuracy for FCNs and 1.5% for CNNs. We further show that our proposed training algorithm can potentially allow the use of significantly larger crossbar arrays of sizes 784 × 500 for FCNs and 4096 × 512 for CNNs with a minor or no tradeoff in accuracy.", "title": "" }, { "docid": "c194e9c91d4a921b42ddacfc1d5a214f", "text": "Smartphone applications' energy efficiency is vital, but many Android applications suffer from serious energy inefficiency problems. Locating these problems is labor-intensive and automated diagnosis is highly desirable. However, a key challenge is the lack of a decidable criterion that facilitates automated judgment of such energy problems. Our work aims to address this challenge. We conducted an in-depth study of 173 open-source and 229 commercial Android applications, and observed two common causes of energy problems: missing deactivation of sensors or wake locks, and cost-ineffective use of sensory data. With these findings, wepropose an automated approach to diagnosing energy problems in Android applications. Our approach explores an application's state space by systematically executing the application using Java PathFinder (JPF). It monitors sensor and wake lock operations to detect missing deactivation of sensors and wake locks. It also tracks the transformation and usage of sensory data and judges whether they are effectively utilized by the application using our state-sensitive data utilization metric. In this way, our approach can generate detailed reports with actionable information to assist developers in validating detected energy problems. We built our approach as a tool, GreenDroid, on top of JPF. Technically, we addressed the challenges of generating user interaction events and scheduling event handlers in extending JPF for analyzing Android applications. We evaluated GreenDroid using 13 real-world popular Android applications. GreenDroid completed energy efficiency diagnosis for these applications in a few minutes. It successfully located real energy problems in these applications, and additionally found new unreported energy problems that were later confirmed by developers.", "title": "" }, { "docid": "647d93e8fce72c5669dcc9cf7a9c255c", "text": "Scientific evidence based on neuroimaging approaches over the last decade has demonstrated the efficacy of physical activity improving cognitive health across the human lifespan. 
Aerobic fitness spares age-related loss of brain tissue during aging, and enhances functional aspects of higher order regions involved in the control of cognition. More active or higher fit individuals are capable of allocating greater attentional resources toward the environment and are able to process information more quickly. These data are suggestive that aerobic fitness enhances cognitive strategies enabling to respond effectively to an imposed challenge with a better yield in task performance. In turn, animal studies have shown that exercise has a benevolent action on health and plasticity of the nervous system. New evidence indicates that exercise exerts its effects on cognition by affecting molecular events related to the management of energy metabolism and synaptic plasticity. An important instigator in the molecular machinery stimulated by exercise is brain-derived neurotrophic factor, which acts at the interface of metabolism and plasticity. Recent studies show that exercise collaborates with other aspects of lifestyle to influence the molecular substrates of cognition. In particular, select dietary factors share similar mechanisms with exercise, and in some cases they can complement the action of exercise. Therefore, exercise and dietary management appear as a noninvasive and effective strategy to counteract neurological and cognitive disorders.", "title": "" }, { "docid": "ac8dd0134ce110e8a662f7f9ded9f5c0", "text": "In this paper, we present a data acquisition and analysis framework for materials-to-devices processes, named 4CeeD, that focuses on the immense potential of capturing, accurately curating, correlating, and coordinating materials-to-devices digital data in a real-time and trusted manner before fully archiving and publishing them for wide access and sharing. In particular, 4CeeD consists of novel services: a curation service for collecting data from microscopes and fabrication instruments, curating, and wrapping of data with extensive metadata in real-time and in a trusted manner, and a cloud-based coordination service for storing data, extracting meta-data, analyzing and finding correlations among the data. Our evaluation results show that our novel cloud framework can help researchers significantly save time and cost spent on experiments, and is efficient in dealing with high-volume and fast-changing workload of heterogeneous types of experimental data.", "title": "" }, { "docid": "c898f6186ff15dff41dcb7b3376b975d", "text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. 
Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.", "title": "" }, { "docid": "00801556f47ccd22804a81babd53dca7", "text": "BACKGROUND\nFood product reformulation is seen as one among several tools to promote healthier eating. Reformulating the recipe for a processed food, e.g. reducing the fat, sugar or salt content of the foods, or increasing the content of whole-grains, can help the consumers to pursue a healthier life style. In this study, we evaluate the effects on calorie sales of a 'silent' reformulation strategy, where a retail chain's private-label brands are reformulated to a lower energy density without making specific claims on the product.\n\n\nMETHODS\nUsing an ecological study design, we analyse 52 weeks' sales data - enriched with data on products' energy density - from a Danish retail chain. Sales of eight product categories were studied. Within each of these categories, specific products had been reformulated during the 52 weeks data period. Using econometric methods, we decompose the changes in calorie turnover and sales value into direct and indirect effects of product reformulation.\n\n\nRESULTS\nFor all considered products, the direct effect of product reformulation was a reduction in the sale of calories from the respective product categories - between 0.5 and 8.2%. In several cases, the reformulation led to indirect substitution effects that were counterproductive with regard to reducing calorie turnover. However, except in two insignificant cases, these indirect substitution effects were dominated by the direct effect of the reformulation, leading to net reductions in calorie sales between -3.1 and 7.5%. For all considered product reformulations, the reformulation had either positive, zero or very moderate negative effects on the sales value of the product category to which the reformulated product belonged.\n\n\nCONCLUSIONS\nBased on these findings, 'silent' reformulation of retailer's private brands towards lower energy density seems to contribute to lowering the calorie intake in the population (although to a moderate extent) with moderate losses in retailer's sales revenues.", "title": "" }, { "docid": "9493fa9f3749088462c1af7b34d9cfc9", "text": "Computer vision assisted diagnostic systems are gaining popularity in different healthcare applications. This paper presents a video analysis and pattern recognition framework for the automatic grading of vertical suspension tests on infants during the Hammersmith Infant Neurological Examination (HINE). The proposed vision-guided pipeline applies a color-based skin region segmentation procedure followed by the localization of body parts before feature extraction and classification. After constrained localization of lower body parts, a stick-diagram representation is used for extracting novel features that correspond to the motion dynamic characteristics of the infant's leg movements during HINE. This set of pose features generated from such a representation includes knee angles and distances between knees and hills. Finally, a time-series representation of the feature vector is used to train a Hidden Markov Model (HMM) for classifying the grades of the HINE tests into three predefined categories. Experiments are carried out by testing the proposed framework on a large number of vertical suspension test videos recorded at a Neuro-development clinic. 
The automatic grading results obtained from the proposed method matches the scores of experts at an accuracy of 74%.", "title": "" }, { "docid": "3ae6440666a5ea56dee2000991a50444", "text": "Flexible medical robots can improve surgical procedures by decreasing invasiveness and increasing accessibility within the body. Using preoperative images, these robots can be designed to optimize a procedure for a particular patient. To minimize invasiveness and maximize biocompatibility, the actuation units of flexible medical robots should be placed fully outside the patient's body. In this letter, we present a novel, compact, lightweight, modular actuation, and control system for driving a class of these flexible robots, known as concentric tube robots. A key feature of the design is the use of three-dimensional printed waffle gears to enable compact control of two degrees of freedom within each module. We measure the precision and accuracy of a single actuation module and demonstrate the ability of an integrated set of three actuation modules to control six degrees of freedom. The integrated system drives a three-tube concentric tube robot to reach a final tip position that is on average less than 2 mm from a given target. In addition, we show a handheld manifestation of the device and present its potential applications.", "title": "" }, { "docid": "3378680ac3eddfde464e1be5ee6986e6", "text": "Boundaries between formal and informal learning settings are shaped by influences beyond learners’ control. This can lead to the proscription of some familiar technologies that learners may like to use from some learning settings. This contested demarcation is not well documented. In this paper, we introduce the term ‘digital dissonance’ to describe this tension with respect to learners’ appropriation of Web 2.0 technologies in formal contexts. We present the results of a study that explores learners’ inand out-of-school use of Web 2.0 and related technologies. The study comprises two data sources: a questionnaire and a mapping activity. The contexts within which learners felt their technologies were appropriate or able to be used are also explored. Results of the study show that a sense of ‘digital dissonance’ occurs around learners’ experience of Web 2.0 activity in and out of school. Many learners routinely cross institutionally demarcated boundaries, but the implications of this activity are not well understood by institutions or indeed by learners themselves. More needs to be understood about the transferability of Web 2.0 skill sets and ways in which these can be used to support formal learning.", "title": "" }, { "docid": "fc431a3c46bdd4fa4ad83b9af10c0922", "text": "The importance of the kidney's role in glucose homeostasis has gained wider understanding in recent years. Consequently, the development of a new pharmacological class of anti-diabetes agents targeting the kidney has provided new treatment options for the management of type 2 diabetes mellitus (T2DM). Sodium glucose co-transporter type 2 (SGLT2) inhibitors, such as dapagliflozin, canagliflozin, and empagliflozin, decrease renal glucose reabsorption, which results in enhanced urinary glucose excretion and subsequent reductions in plasma glucose and glycosylated hemoglobin concentrations. Modest reductions in body weight and blood pressure have also been observed following treatment with SGLT2 inhibitors. 
SGLT2 inhibitors appear to be generally well tolerated, and have been used safely when given as monotherapy or in combination with other oral anti-diabetes agents and insulin. The risk of hypoglycemia is low with SGLT2 inhibitors. Typical adverse events appear to be related to the presence of glucose in the urine, namely genital mycotic infection and lower urinary tract infection, and are more often observed in women than in men. Data from long-term safety studies with SGLT2 inhibitors and from head-to-head SGLT2 inhibitor comparator studies are needed to fully determine their benefit-risk profile, and to identify any differences between individual agents. However, given current safety and efficacy data, SGLT2 inhibitors may present an attractive option for T2DM patients who are failing with metformin monotherapy, especially if weight is part of the underlying treatment consideration.", "title": "" }, { "docid": "d0bb735eadd569508827d9a55ff492f5", "text": "The emergence of social media has had a significant impact on how people communicate and socialize. Teens use social media to make and maintain social connections with friends and build their reputation. However, the way of analyzing the characteristics of teens in social media has mostly relied on ethnographic accounts or quantitative analyses with small datasets. This paper shows the possibility of detecting age information in user profiles by using a combination of textual and facial recognition methods and presents a comparative study of 27K teens and adults in Instagram. Our analysis highlights that (1) teens tend to post fewer photos but highly engage in adding more tags to their own photos and receiving more Likes and comments about their photos from others, and (2) to post more selfies and express themselves more than adults, showing a higher sense of self-representation. We demonstrate the application of our novel method that shows clear trends of age differences as well as substantiates previous insights in social media.", "title": "" }, { "docid": "2cd905573be23462b5768e2dcdf8847b", "text": "Identity verification is an increasingly important process in our daily lives. Whether we need to use our own equipment or to prove our identity to third parties in order to use services or gain access to physical places, we are constantly required to declare our identity and prove our claim. Traditional authentication methods fall into two categories: proving that you know something (i.e., password-based authentication) and proving that you own something (i.e., token-based authentication). These methods connect the identity with an alternate and less rich representation, for instance a password, that can be lost, stolen, or shared. A solution to these problems comes from biometric recognition systems. Biometrics offers a natural solution to the authentication problem, as it contributes to the construction of systems that can recognize people by the analysis of their anatomical and/or behavioral characteristics. With biometric systems, the representation of the identity is something that is directly derived from the subject, therefore it has properties that a surrogate representation, like a password or a token, simply cannot have (Jain et al. (2006; 2004); Prabhakar et al. (2003)). The strength of a biometric system is determined mainly by the trait that is used to verify the identity. Plenty of biometric traits have been studied and some of them, like fingerprint, iris and face, are nowadays used in widely deployed systems. 
Today, one of the most important research directions in the field of biometrics is the characterization of novel biometric traits that can be used in conjunction with other traits, to limit their shortcomings or to enhance their performance. The aim of this chapter is to introduce the reader to the usage of heart sounds for biometric recognition, describing the strengths and the weaknesses of this novel trait and analyzing in detail the methods developed so far and their performance. The usage of heart sounds as physiological biometric traits was first introduced in Beritelli & Serrano (2007), in which the authors proposed and started exploring this idea. Their system is based on the frequency analysis, by means of the Chirp z-Transform (CZT), of the sounds produced by the heart during the closure of the mitral tricuspid valve and during the closure of the aortic pulmonary valve. These sounds, called S1 and S2, are extracted from the input 11", "title": "" }, { "docid": "9070c149fba6467b1c9abd44865ad9f7", "text": "The World Wide Web has intensely evolved a novel way for people to express their views and opinions about different topics, trends and issues. The user-generated content present on different mediums such as internet forums, discussion groups, and blogs serves a concrete and substantial base for decision making in various fields such as advertising, political polls, scientific surveys, market prediction and business intelligence. Sentiment analysis relates to the problem of mining the sentiments from online available data and categorizing the opinion expressed by an author towards a particular entity into at most three preset categories: positive, negative and neutral. In this paper, firstly we present the sentiment analysis process to classify highly unstructured data on Twitter. Secondly, we discuss various techniques to carryout sentiment analysis on Twitter data in detail. Moreover, we present the parametric comparison of the discussed techniques based on our identified parameters.", "title": "" }, { "docid": "c428c35e7bd0a2043df26d5e2995f8eb", "text": "Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires nontrivial computation effort, practical attacks exist which either waste miners' computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the verifier's dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier's dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.", "title": "" }, { "docid": "8b05f1d48e855580a8b0b91f316e89ab", "text": "The demand for improved service delivery requires new approaches and attitudes from local government. 
Implementation of knowledge sharing practices in local government is one of the critical processes that can help to establish learning organisations. The main purpose of this paper is to investigate how knowledge management systems can be used to improve the knowledge sharing culture among local government employees. The study used an inductive research approach which included a thorough literature review and content analysis. The technology-organisation-environment theory was used as the theoretical foundation of the study. Making use of critical success factors, the study advises how existing knowledge sharing practices can be supported and how new initiatives can be developed, making use of a knowledge management system. The study recommends that local government must ensure that knowledge sharing practices and initiatives are fully supported and promoted by top management.", "title": "" }, { "docid": "922c0a315751c90a11b018547f8027b2", "text": "We propose a model for the recently discovered Θ+ exotic KN resonance as a novel kind of a pentaquark with an unusual color structure: a 3c ud diquark, coupled to 3c uds̄ triquark in a relative P -wave. The state has J P = 1/2+, I = 0 and is an antidecuplet of SU(3)f . A rough mass estimate of this pentaquark is close to experiment.", "title": "" }, { "docid": "c858d0fd00e7cc0d5ee38c49446264f4", "text": "Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research. However, the majority of works aim to adopt and assess methods that have been shown to be effective in other domains, while there is still a great need for more original research focusing on music primarily and utilising musical knowledge and insight. The goal of this paper is to boost the interest of beginners by providing a comprehensive tutorial and reducing the barriers to entry into deep learning for MIR. We lay out the basic principles and review prominent works in this hard to navigate field. We then outline the network structures that have been successful in MIR problems and facilitate the selection of building blocks for the problems at hand. Finally, guidelines for new tasks and some advanced topics in deep learning are discussed to stimulate new research in this fascinating field.", "title": "" }, { "docid": "39be1d73b84872b0ae1d61bbd0fc96f8", "text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.", "title": "" } ]
scidocsrr
96a27f00414afb5d10de8ef79a9dfbc4
Semantic Tagging of Mathematical Expressions
[ { "docid": "83fabef0cead9453d8081f834a08d868", "text": "1. SYSTEM OVERVIEW Researchers working in technical disciplines wishing to search for information related to a particular mathematical expression cannot effectively do so with a text-based search engine unless they know appropriate text keywords. To overcome this difficulty, we demonstrate a math-aware search engine, which extends the capability of existing text search engines to search mathematical content.", "title": "" }, { "docid": "0bf3c08b71fedd629bdc584c3deeaa34", "text": "Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary.", "title": "" } ]
[ { "docid": "adb64a513ab5ddd1455d93fc4b9337e6", "text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.", "title": "" }, { "docid": "86e646b845384d3cfbb146075be5c02a", "text": "Content-Based Image Retrieval (CBIR) has become one of the most active research areas in the past few years. Many visual feature representations have been explored and many systems built. While these research e orts establish the basis of CBIR, the usefulness of the proposed approaches is limited. Speci cally, these e orts have relatively ignored two distinct characteristics of CBIR systems: (1) the gap between high level concepts and low level features; (2) subjectivity of human perception of visual content. This paper proposes a relevance feedback based interactive retrieval approach, which e ectively takes into account the above two characteristics in CBIR. During the retrieval process, the user's high level query and perception subjectivity are captured by dynamically updated weights based on the user's relevance feedback. The experimental results show that the proposed approach greatly reduces the user's e ort of composing a query and captures the user's information need more precisely.", "title": "" }, { "docid": "9ffaf53e8745d1f7f5b7ff58c77602c6", "text": "Background subtraction is a widely used approach for detecting moving objects from static cameras. Many different methods have been proposed over the recent years and both the novice and the expert can be confused about their benefits and limitations. In order to overcome this problem, this paper provides a review of the main methods and an original categorisation based on speed, memory requirements and accuracy. Such a review can effectively guide the designer to select the most suitable method for a given application in a principled way. Methods reviewed include parametric and non-parametric background density estimates and spatial correlation approaches.", "title": "" }, { "docid": "2e7dd876af56a4698d3e79d3aa5f2eff", "text": "Although there are numerous aetiologies for coccygodynia described in the medical literature, precoccygeal epidermal inclusion cyst presenting as a coccygodynia has not been reported. We report a 30-year-old woman with intractable coccygodynia. Magnetic resonance imaging showed a circumscribed precoccygeal cystic lesion. The removed cyst was pearly-white in appearance and contained cheesy material. 
Histological evaluation established the diagnosis of epidermal inclusion cyst with mild nonspecific inflammation. The patient became asymptomatic and remained so at two years follow-up. This report suggests that precoccygeal epidermal inclusion cyst should be considered as one of the differential diagnosis of coccygodynia. Our experience suggests that patients with intractable coccygodynia should have a magnetic resonance imaging to rule out treatable causes of coccygodynia.", "title": "" }, { "docid": "1c34abb0e212034a5fb96771499f1ee3", "text": "Facial expression recognition is a useful feature in modern human computer interaction (HCI). In order to build efficient and reliable recognition systems, face detection, feature extraction and classification have to be robustly realised. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical flow features and illustrates their comparison and integration for facial expression recognition. In the authors’ method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which is regarded as a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, what leads to the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.", "title": "" }, { "docid": "dd06708ab6f67287e213bdb7b4436491", "text": "Here we present the design of a passive-dynamics based, fully autonomous, 3-D, bipedal walking robot that uses simple control, consumes little energy, and has human-like morphology and gait. Design aspects covered here include the freely rotating hip joint with angle bisecting mechanism; freely rotating knee joints with latches; direct actuation of the ankles with a spring, release mechanism, and reset motor; wide feet that are shaped to aid lateral stability; and the simple control algorithm. The biomechanics context of this robot is discussed in more detail in [1], and movies of the robot walking are available at Science Online and http://www.tam.cornell.edu/~ruina/powerwalk.html. This robot adds evidence to the idea that passive-dynamic approaches might help design walking robots that are simpler, more efficient and easier to control.", "title": "" }, { "docid": "966fa8e8eaf66201494633e582e11a31", "text": "This paper describes the development of a noninvasive blood pressure measurement (NIBP) device based on the oscillometric principle. The device is composed of an arm cuff, an air-pumping motor, a solenoid valve, a pressure transducer, and a 2×16 characters LCD display module and a microcontroller which acts as the central controller and processor for the hardware. In the development stage, an auxiliary instrumentation for signal acquisition and digital signal processing using LabVIEW, which is also known as virtual instrument (VI), is incorporated for learning and experimentation purpose. Since the most problematic part of metrological evaluation of an oscillometric NIBP system is in the proprietary algorithms of determining systolic blood pressure (SBP) and diastolic blood pressure (DBP), the amplitude algorithm is used. 
The VI is a useful tool for studying data acquisition and signal processing to determine SBP and DBP from the maximum of the oscillations envelope. The knowledge from VI procedures is then adopted into a stand alone NIBP device. SBP and DBP are successfully obtained using the circuit developed for the NIBP device. The work done is a proof of design concept that requires further refinement.", "title": "" }, { "docid": "c481baeab2091672c044c889b1179b1f", "text": "Our research is based on an innovative approach that integrates computational thinking and creative thinking in CS1 to improve student learning performance. Referencing Epstein's Generativity Theory, we designed and deployed a suite of creative thinking exercises with linkages to concepts in computer science and computational thinking, with the premise that students can leverage their creative thinking skills to \"unlock\" their understanding of computational thinking. In this paper, we focus on our study on differential impacts of the exercises on different student populations. For all students there was a linear \"dosage effect\" where completion of each additional exercise increased retention of course content. The impacts on course grades, however, were more nuanced. CS majors had a consistent increase for each exercise, while non-majors benefited more from completing at least three exercises. It was also important for freshmen to complete all four exercises. We did find differences between women and men but cannot draw conclusions.", "title": "" }, { "docid": "a67df1737ca4e5cb41fe09ccb57c0e88", "text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive; its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) method was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bioinspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulations results are validated via built-in hardware prototype. © 2015 Published by Elsevier B.V. 37 38 39 40 41 42 43 44 45 46 47 48 . Introduction Ever growing energy demand by mankind and the limited availbility of resources remain as a major challenge to the power sector ndustry. 
The need for renewable energy resources has been augmented in large scale and aroused due to its huge availability and pollution free operation. Among the various renewable energy resources, solar energy has gained worldwide recognition because of its minimal maintenance, zero noise and reliability. Because of the aforementioned advantages, solar energy has been widely used for various applications, but not limited to, such as megawatt scale power plants, water pumping, solar home systems, communication satellites, space vehicles and reverse osmosis plants [1]. However, power generation using solar energy still remains uncertain, despite all the efforts, due to various factors such as poor conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature, imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and Hill Climbing (HC) algorithm [23–26]. In the fractional open circuit voltage and fractional short circuit current methods, performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, the exact value of the Maximum Power Point (MPP) cannot be assured. The Perturb and Observe (P&O) method works with voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between tracking speed and the steady state oscillations in the region of MPP [15]. The Incremental Conductance (Inc. Cond.) algorithm works on the principle of comparing ratios of incremental conductance with instantaneous conductance and it has a similar disadvantage to that of the P&O method [20,21]. The HC method works like P&O but is based on perturbation of the duty cycle of the power converter. All these traditional methods have the following disadvantages in common: reduced efficiency and steady state oscillations around MPP. Realizing the above stated drawbacks, various researchers have worked on applying certain Artificial Intelligence (AI) techniques like Neural Network (NN) [27,28] and Fuzzy Logic Control (FLC) [29,30]. However, these techniques require periodic training, an enormous volume of data for training, computational complexity and large memory capacity. Application of the aforementioned MPPT methods for centralized/string PV systems is limited as they fail to track the global peak power under partial shading conditions. In addition, multiple peaks occur in the P-V curve under partial shading conditions, in which the unique peak point, i.e., the global power peak, should be attained. However, when conventional MPPT techniques are used under such conditions, they usually get trapped in any one of the local power peaks, drastically lowering the search efficiency. Hence, to improve the MPP tracking efficiency of conventional methods under partial shading conditions, certain modifications have been proposed in Ref. [31]. Some used a two stage approach to track the MPP [32]. In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak, which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved, the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33].
Alike two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researcher’s attention for MPPT application; since they ensure sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, PSO method is largely discussed and widely used for solar MPPT due to the fact that it has simple structure, system independency, high adaptability and lesser number of tuning parameters. Further in PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading condition are not discussed. Further, this method is only applicable when the entire module receive uniform insolation cannot be considered. Traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia. Even though the method showed better performance, use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach", "title": "" }, { "docid": "253fb54d00d50a407452fff881390ba1", "text": "In this work, we investigate the effects of the cascade architecture of dilated convolutions and the deep network architecture of multi-resolution input images on the accuracy of semantic segmentation. We show that a cascade of dilated convolutions is not only able to efficiently capture larger context without increasing computational costs, but can also improve the localization performance. In addition, the deep network architecture for multi-resolution input images increases the accuracy of semantic segmentation by aggregating multi-scale contextual information. Furthermore, our fully convolutional neural network is coupled with a model of fully connected conditional random fields to further remove isolated false positives and improve the prediction along object boundaries. We present several experiments on two challenging image segmentation datasets, showing substantial improvements over strong baselines.", "title": "" }, { "docid": "d961bd734577dad36588f883e56c3a5d", "text": "Received Jan 5, 2018 Revised Feb 14, 2018 Accepted Feb 28, 2018 This paper proposes Makespan and Reliability based approach, a static sheduling strategy for distributed real time embedded systems that aims to optimize the Makespan and the reliability of an application. This scheduling problem is NP-hard and we rely on a heuristic algorithm to obtain efficiently approximate solutions. 
Two contributions have to be outlined: First, a hierarchical cooperation between heuristics ensuring to treat alternatively the objectives and second, an Adapatation Module allowing to improve solution exploration by extending the search space. It results a set of compromising solutions offering the designer the possibility to make choices in line with his (her) needs. The method was tested and experimental results are provided.", "title": "" }, { "docid": "6b214fdd60a1a4efe27258c2ab948086", "text": "Ambient Assisted Living (AAL) aims to create innovative technical solutions and services to support independent living among older adults, improve their quality of life and reduce the costs associated with health and social care. AAL systems provide health monitoring through sensor based technologies to preserve health and functional ability and facilitate social support for the ageing population. Human activity recognition (HAR) is an enabler for the development of robust AAL solutions, especially in safety critical environments. Therefore, HAR models applied within this domain (e.g. for fall detection or for providing contextual information to caregivers) need to be accurate to assist in developing reliable support systems. In this paper, we evaluate three machine learning algorithms, namely Support Vector Machine (SVM), a hybrid of Hidden Markov Models (HMM) and SVM (SVM-HMM) and Artificial Neural Networks (ANNs) applied on a dataset collected between the elderly and their caregiver counterparts. Detected activities will later serve as inputs to a bidirectional activity awareness system for increasing social connectedness. Results show high classification performances for all three algorithms. Specifically, the SVM-HMM hybrid demonstrates the best classification performance. In addition to this, we make our dataset publicly available for use by the machine learning community.", "title": "" }, { "docid": "f201e043022b02ecd763e2b6d751d21b", "text": "The paper presents an efficient and reliable approach to automatic people segmentation, tracking and counting, designed for a system with an overhead mounted (zenithal) camera. Upon the initial block-wise background subtraction, k-means clustering is used to enable the segmentation of single persons in the scene. The number of people in the scene is estimated as the maximal number of clusters with acceptable inter-cluster separation. Tracking of segmented people is addressed as a problem of dynamic cluster assignment between two consecutive frames and it is solved in a greedy fashion. Systems for people counting are applied to people surveillance and management and lately within the ambient intelligence solutions. Experimental results suggest that the proposed method is able to achieve very good results in terms of counting accuracy and execution speed.", "title": "" }, { "docid": "e3b91b1133a09d7c57947e2cd85a17c7", "text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. 
Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.", "title": "" }, { "docid": "b420be5b34185e4604f22b038a605c92", "text": "Computer networks are inherently social networks, linking people, organizations, and knowledge. They are social institutions that should not be studied in isolation but as integrated into everyday lives. The proliferation of computer networks has facilitated a deemphasis on group solidarities at work and in the community and afforded a turn to networked societies that are loosely bounded and sparsely knit. The Internet increases people's social capital, increasing contact with friends and relatives who live nearby and far away. New tools must be developed to help people navigate and find knowledge in complex, fragmented, networked societies.", "title": "" }, { "docid": "29414157d8054f80db977a2b90992a23", "text": "Scatter search and its generalized form called path relinking are evolutionary methods that have recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, these methods use strategies for combining solution vectors that have proved effective for scheduling, routing, financial product design, neural network training, optimizing simulation and a variety of other problem areas. These approaches can be implemented in multiple ways, and offer numerous alternatives for exploiting their basic ideas. We identify a template for scatter search and path relinking methods that provides a convenient and \"user friendly\" basis for their implementation. The overall design can be summarized by a small number of key steps, leading to versions of scatter search and path relinking that are fully specified upon providing a handful of subroutines. Illustrative forms of these subroutines are described that make it possible to create methods for a wide range of optimization problems. Highlights of these components include new diversification generators for zero-one and permutation problems (extended by a mapping-byobjective technique that handles additional classes of problems), together with processes to avoid generating or incorporating duplicate solutions at various stages (related to the avoidance of cycling in tabu search) and a new method for creating improved solutions. *** UPDATED AND EXTENDED: February 1998 *** Previous version appeared in Lecture Notes in Computer Science, 1363, J.K. Hao, E. Lutton, E. Ronald, M. Schoenauer, D. Snyers (Eds.), 13-54, 1997. This research was supported in part by the Air Force Office of Scientific Research Grant #F49620-97-1-0271.", "title": "" }, { "docid": "d7780a122b51adc30f08eeb13af78bd1", "text": "Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. 
Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improve, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the \"wear and tear\" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks, to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.", "title": "" }, { "docid": "efed670ac36ee6f4e084755b4b408467", "text": "In a variety of problem domains, it has been observed that the aggregate opinions of groups are often more accurate than those of the constituent individuals, a phenomenon that has been termed the \"wisdom of the crowd.\" Yet, perhaps surprisingly, there is still little consensus on how generally the phenomenon holds, how best to aggregate crowd judgements, and how social influence affects estimates. We investigate these questions by taking a meta wisdom of crowds approach. With a distributed team of over 100 student researchers across 17 institutions in the United States and India, we develop a large-scale online experiment to systematically study the wisdom of crowds effect for 1,000 different tasks in 50 subject domains. These tasks involve various types of knowledge (e.g., explicit knowledge, tacit knowledge, and prediction), question formats (e.g., multiple choice and point estimation), and inputs (e.g., text, audio, and video). To examine the effect of social influence, participants are randomly assigned to one of three different experiment conditions in which they see varying degrees of information on the responses of others. In this ongoing project, we are now preparing to recruit participants via Amazon's Mechanical Turk.", "title": "" }, { "docid": "9409922d01a00695745939b47e6446a0", "text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting.
We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.", "title": "" }, { "docid": "8d7467bf868d3a75821aa8f4f7513312", "text": "Search on PCs has become less efficient than searching the Web due to the increasing amount of stored data. In this paper we present an innovative Desktop search solution, which relies on extracted metadata, context information as well as additional background information for improving Desktop search results. We also present a practical application of this approach — the extensible Beagle toolbox. To prove the validity of our approach, we conducted a series of experiments. By comparing our results against the ones of a regular Desktop search solution — Beagle — we show an improved quality in search and overall performance.", "title": "" } ]
scidocsrr
88133fc009fd35f3a7f47df4ce9ad01c
Design Activity Framework for Visualization Design
[ { "docid": "ed4dcf690914d0a16d2017409713ea5f", "text": "We argue that HCI has emerged as a design-oriented field of research, directed at large towards innovation, design, and construction of new kinds of information and interaction technology. But the understanding of such an attitude to research in terms of philosophical, theoretical, and methodological underpinnings seems however relatively poor within the field. This paper intends to specifically address what design 'is' and how it is related to HCI. First, three candidate accounts from design theory of what design 'is' are introduced; the conservative, the romantic, and the pragmatic. By examining the role of sketching in design, it is found that the designer becomes involved in a necessary dialogue, from which the design problem and its solution are worked out simultaneously as a closely coupled pair. In conclusion, it is proposed that we need to acknowledge, first, the role of design in HCI conduct, and second, the difference between the knowledge-generating Design-oriented Research and the artifact-generating conduct of Research-oriented Design.", "title": "" } ]
[ { "docid": "aa2b1a8d0cf511d5862f56b47d19bc6a", "text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:", "title": "" }, { "docid": "381ce2a247bfef93c67a3c3937a29b5a", "text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.", "title": "" }, { "docid": "ccf8e1f627af3fe1327a4fa73ac12125", "text": "One of the most common needs in manufacturing plants is rejecting products not coincident with the standards as anomalies. Accurate and automatic anomaly detection improves product reliability and reduces inspection cost. Probabilistic models have been employed to detect test samples with lower likelihoods as anomalies in unsupervised manner. Recently, a probabilistic model called deep generative model (DGM) has been proposed for end-to-end modeling of natural images and already achieved a certain success. However, anomaly detection of machine components with complicated structures is still challenging because they produce a wide variety of normal image patches with low likelihoods. For overcoming this difficulty, we propose unregularized score for the DGM. As its name implies, the unregularized score is the anomaly score of the DGM without the regularization terms. The unregularized score is robust to the inherent complexity of a sample and has a smaller risk of rejecting a sample appearing less frequently but being coincident with the standards.", "title": "" }, { "docid": "ca29896e6adcd09ebcb6456d1b7678fe", "text": "Causation looms large in legal and moral reasoning. People construct causal models of the social and physical world to understand what has happened, how and why, and to allocate responsibility and blame. This chapter explores people’s commonsense notion of causation, and shows how it underpins moral and legal judgments. As a guiding framework it uses the causal model framework (Pearl, 2000) rooted in structural models and counterfactuals, and shows how it can resolve many of the problems that beset standard butfor analyses. 
It argues that legal concepts of causation are closely related to everyday causal reasoning, and both are tailored to the practical concerns of responsibility attribution. Causal models are also critical when people evaluate evidence, both in terms of the stories they tell to make sense of evidence, and the methods they use to assess its credibility and reliability.", "title": "" }, { "docid": "64de73be55c4b594934b0d1bd6f47183", "text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.", "title": "" }, { "docid": "ba6873627b976fa1a3899303b40eae3c", "text": "Most plant seeds are dispersed in a dry, mature state. If these seeds are non-dormant and the environmental conditions are favourable, they will pass through the complex process of germination. In this review, recent progress made with state-of-the-art techniques including genome-wide gene expression analyses that provided deeper insight into the early phase of seed germination, which includes imbibition and the subsequent plateau phase of water uptake in which metabolism is reactivated, is summarized. The physiological state of a seed is determined, at least in part, by the stored mRNAs that are translated upon imbibition. Very early upon imbibition massive transcriptome changes occur, which are regulated by ambient temperature, light conditions, and plant hormones. The hormones abscisic acid and gibberellins play a major role in regulating early seed germination. The early germination phase of Arabidopsis thaliana culminates in testa rupture, which is followed by the late germination phase and endosperm rupture. An integrated view on the early phase of seed germination is provided and it is shown that it is characterized by dynamic biomechanical changes together with very early alterations in transcript, protein, and hormone levels that set the stage for the later events. Early seed germination thereby contributes to seed and seedling performance important for plant establishment in the natural and agricultural ecosystem.", "title": "" }, { "docid": "0f56b99bc1d2c9452786c05242c89150", "text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. 
Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.", "title": "" }, { "docid": "0b9dd779ada0ed128c95822f647e0a00", "text": "In this paper, we propose a bigram based supervised method for extractive document summarization in the integer linear programming (ILP) framework. For each bigram, a regression model is used to estimate its frequency in the reference summary. The regression model uses a variety of indicative features and is trained discriminatively to minimize the distance between the estimated and the ground truth bigram frequency in the reference summary. During testing, the sentence selection problem is formulated as an ILP problem to maximize the bigram gains. We demonstrate that our system consistently outperforms the previous ILP method on different TAC data sets, and performs competitively compared to the best results in the TAC evaluations. We also conducted various analysis to show the impact of bigram selection, weight estimation, and ILP setup.", "title": "" }, { "docid": "e5667a65bc628b93a1d5b0e37bfb8694", "text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. 
The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.", "title": "" }, { "docid": "e0fc5dabbc57100a1c726703e82be706", "text": "In this paper, we examined the effects of financial news on the Ho Chi Minh Stock Exchange (HoSE) and we tried to predict the direction of the VN30 Index after the news articles were published. In order to do this study, we got news articles from three big financial websites and we represented them as feature vectors. Recently, researchers have used machine learning techniques to integrate financial news into their prediction models. News articles are an important factor that influences investors quickly, so it is worth considering their impact when predicting stock market trends. Previous works focused only on market news or on the analysis of the stock quotes in the past to predict the stock market behavior in the future. We aim to build a stock trend prediction model using both stock news and stock prices of the VN30 index, to be applied in the Vietnamese stock market, where there has been little focus on using news articles to predict the stock direction. Experimental results show that our proposed method achieved high accuracy in VN30 index trend prediction.", "title": "" }, { "docid": "1f0dbec4f21549780d25aa81401494c6", "text": "Parallel scientific applications require high-performance I/O support from underlying file systems. A comprehensive understanding of the expected workload is therefore essential for the design of high-performance parallel file systems. We re-examine the workload characteristics in parallel computing environments in the light of recent technology advances and new applications. We analyze application traces from a cluster with hundreds of nodes. On average, each application has only one or two typical request sizes. Large requests from several hundred kilobytes to several megabytes are very common. Although in some applications, small requests account for more than 90% of all requests, almost all of the I/O data are transferred by large requests. All of these applications show bursty access patterns. More than 65% of write requests have inter-arrival times within one millisecond in most applications. By running the same benchmark on different file models, we also find that the write throughput of using an individual output file for each node exceeds that of using a shared file for all nodes by a factor of 5. This indicates that current file systems are not well optimized for file sharing.", "title": "" }, { "docid": "b622c27ba400e349d2b1ad40c7fc90e1", "text": "In this work we examine the feasibility of quantitatively characterizing some aspects of security. In particular, we investigate if it is possible to predict the number of vulnerabilities that can potentially be present in a software system but may not have been found yet. We use several major operating systems as representatives of complex software systems. The data on vulnerabilities discovered in these systems are analyzed.
We examine the results to determine if the density of vulnerabilities in a program is a useful measure. We also address the question of what fraction of software defects are security related, i.e., are vulnerabilities. We examine the dynamics of vulnerability discovery, hypothesizing that it may lead us to an estimate of the magnitude of the undiscovered vulnerabilities still present in the system. We consider the vulnerability discovery rate to see if models can be developed to project future trends. Finally, we use the data for both commercial and open-source systems to determine whether the key observations are generally applicable. Our results indicate that the values of vulnerability densities fall within a range of values, just like the commonly used measure of defect density for general defects. Our examination also reveals that it is possible to model the vulnerability discovery using a logistic model that can sometimes be approximated by a linear model. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6386c0ef0d7cc5c33e379d9c4c2ca019", "text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintigraphy at the time of presentation with IT melanoma/LR was successful in 94% (31 of 33) of cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) of cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB can not only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.", "title": "" }, { "docid": "2838311a22810aa2b5e0747e06d87a9b", "text": "To build a fashion recommendation system, we need to help users retrieve fashionable items that are visually similar to a particular query, for reasons ranging from searching alternatives (i.e., substitutes), to generating stylish outfits that are visually consistent, among other applications. In domains like clothing and accessories, such considerations are particularly paramount as the visual appearance of items is a critical feature that guides users’ decisions. However, existing systems like Amazon and eBay still rely mainly on keyword search and recommending loosely consistent items (e.g.
based on co-purchasing or browsing data), without an interface that makes use of visual information to serve the above needs. In this paper, we attempt to fill this gap by designing and implementing an image-based query system, called Fashionista, which provides a graphical interface to help users efficiently explore those items that are not only visually similar to a given query, but which are also fashionable, as determined by visually-aware recommendation approaches. Methodologically, Fashionista learns a low-dimensional visual space as well as the evolution of fashion trends from large corpora of binary feedback data such as purchase histories of Women’s Clothing & Accessories from Amazon, which we use for this demonstration.", "title": "" }, { "docid": "775969c0c6ad9224cdc9b73706cb5b4f", "text": "This paper discusses how hot carrier injection (HCI) can be exploited to create a trojan that will cause hardware failures. The trojan is produced not via additional logic circuitry but by controlled scenarios that maximize and accelerate the HCI effect in transistors. These scenarios range from manipulating the manufacturing process to varying the internal voltage distribution. This new type of trojan is difficult to test due to its gradual hardware degradation mechanism. This paper describes the HCI effect, detection techniques and discusses the possibility for maliciously induced HCI trojans.", "title": "" }, { "docid": "47f1d6df5ec3ff30d747fb1fcbc271a7", "text": "Experimental studies routinely show that participants who play a violent game are more aggressive immediately following game play than participants who play a nonviolent game. The underlying assumption is that nonviolent games have no effect on aggression, whereas violent games increase it. The current studies demonstrate that, although violent game exposure increases aggression, nonviolent video game exposure decreases aggressive thoughts and feelings (Exp 1) and aggressive behavior (Exp 2). When participants assessed after a delay were compared to those measured immediately following game play, violent game players showed decreased aggressive thoughts, feelings and behavior, whereas nonviolent game players showed increases in these outcomes. Experiment 3 extended these findings by showing that exposure to nonviolent puzzle-solving games with no expressly prosocial content increases prosocial thoughts, relative to both violent game exposure and, on some measures, a no-game control condition. Implications of these findings for models of media effects are discussed. A major development in mass media over the last 25 years has been the advent and rapid growth of the video game industry. From the earliest arcade-based console games, video games have been immediately and immensely popular, particularly among young people, and their subsequent introduction to the home market only served to further elevate their prevalence (Gentile, 2009). Given their popularity, social scientists have been concerned with the potential effects of video games on those who play them, focusing particularly on games with violent content. While a large percentage of games have always involved the destruction of enemies, recent advances in technology have enabled games to become steadily more realistic. Coupled with an increase in the number of adult players, these advances have enabled the development of games involving more and more graphic violence.
Over the past several years, the majority of best-selling games have involved frequent and explicit acts of violence as a central gameplay theme (Smith, Lachlan, & Tamborini, 2003). A video game is essentially a simulated experience. Virtually every major theory of human aggression, including social learning theory, predicts that repeated simulation of antisocial behavior will produce an increase in antisocial behavior (e.g., aggression) and a decrease in prosocial behavior (e.g., helping) outside the simulated environment (i.e., in \"real life\"). In addition, an increase in the perceived realism of the simulation is posited to increase the strength of negative effects (Gentile & Anderson, 2003). Meta-analyses …", "title": "" }, { "docid": "72a5db33e2ba44880b3801987b399c3d", "text": "Over the last decade, the ever increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need for new research avenues. According to the World Health Organization (WHO), an early detection of cancer greatly increases the chances of taking the right decision on a successful treatment plan. Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many different kinds of abnormalities. Therefore, improving the accuracy of a CAD system has become one of the major research areas. In this paper, a CAD scheme for detection of breast cancer has been developed using a deep belief network unsupervised path followed by a back-propagation supervised path. The construction is a back-propagation neural network with a Levenberg-Marquardt learning function, while weights are initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier complex gives an accuracy of 99.68%, indicating promising results over previously-published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "318a4af201ed3563443dcbe89c90b6b4", "text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread across the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks have become more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by a classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand, identify and mitigate potential DDoS attacks on business networks.
The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation", "title": "" }, { "docid": "12be3f9c1f02ad3f26462ab841a80165", "text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).", "title": "" }, { "docid": "0b9ae0bf6f6201249756d87a56f0005e", "text": "To reduce energy consumption and wastage, effective energy management at home is key and an integral part of the future Smart Grid. In this paper, we present the design and implementation of Green Home Service (GHS) for home energy management. Our approach addresses the key issues of home energy management in Smart Grid: a holistic management solution, improved device manageability, and an enabler of Demand-Response. We also present the scheduling algorithms in GHS for smart energy management and show the results in simulation studies.", "title": "" } ]
scidocsrr
f7cb103256c4ec70f6f8ddd54df67bd5
The software value map - an exhaustive collection of value aspects for the development of software intensive products
[ { "docid": "48fc7aabdd36ada053ebc2d2a1c795ae", "text": "The Value-Based Software Engineering (VBSE) agenda described in the preceding article has the objectives of integrating value considerations into current and emerging software engineering principles and practices, and of developing an overall framework in which they compatibly reinforce each other. In this paper, we provide a case study illustrating some of the key VBSE practices, and focusing on a particular anomaly in the monitoring and control area: the \"Earned Value Management System.\" This is a most useful technique for monitoring and controlling the cost, schedule, and progress of a complex project. But it has absolutely nothing to say about the stakeholder value of the system being developed. The paper introduces an example order-processing software project, and shows how the use of Benefits Realization Analysis, stake-holder value proposition elicitation and reconciliation, and business case analysis provides a framework for stakeholder-earned-value monitoring and control.", "title": "" }, { "docid": "d362b36e0c971c43856a07b7af9055f3", "text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,", "title": "" } ]
[ { "docid": "8b067b1115d4bc7c8656564bc6963d7b", "text": "Sentence Function: Indicating the conversational purpose of speakers • Interrogative: Acquire further information from the user • Imperative: Make requests, instructions or invitations to elicit further information • Declarative: Make statements to state or explain something Response Generation Task with Specified Sentence Function • Global Control: Plan different types of words globally • Compatibility: Controllable sentence function + informative content", "title": "" }, { "docid": "06e58f46c989f22037f443ccf38198ce", "text": "Many biological surfaces in both the plant and animal kingdom possess unusual structural features at the micro- and nanometre-scale that control their interaction with water and hence wettability. An intriguing example is provided by desert beetles, which use micrometre-sized patterns of hydrophobic and hydrophilic regions on their backs to capture water from humid air. As anyone who has admired spider webs adorned with dew drops will appreciate, spider silk is also capable of efficiently collecting water from air. Here we show that the water-collecting ability of the capture silk of the cribellate spider Uloborus walckenaerius is the result of a unique fibre structure that forms after wetting, with the ‘wet-rebuilt’ fibres characterized by periodic spindle-knots made of random nanofibrils and separated by joints made of aligned nanofibrils. These structural features result in a surface energy gradient between the spindle-knots and the joints and also in a difference in Laplace pressure, with both factors acting together to achieve continuous condensation and directional collection of water drops around spindle-knots. Submillimetre-sized liquid drops have been driven by surface energy gradients or a difference in Laplace pressure, but until now neither force on its own has been used to overcome the larger hysteresis effects that make the movement of micrometre-sized drops more difficult. By tapping into both driving forces, spider silk achieves this task. Inspired by this finding, we designed artificial fibres that mimic the structural features of silk and exhibit its directional water-collecting ability.", "title": "" }, { "docid": "79c2623b0e1b51a216fffbc6bbecd9ec", "text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. 
They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.", "title": "" }, { "docid": "e5f50bc18cefc486ead4b92f9df178dc", "text": "Mobile users of computation and communication services have been rapidly adopting battery-powered mobile handhelds, such as PocketPCs and SmartPhones, for their work. However, the limited battery-lifetime of these devices restricts their portability and applicability, and this weakness can be exacerbated by mobile malware targeting depletion of battery energy. Such malware are usually difficult to detect and prevent, and frequent outbreaks of new malware variants also reduce the effectiveness of commonly-seen signature-based detection. To alleviate these problems, we propose a power-aware malware-detection framework that monitors, detects, and analyzes previously unknown energy-depletion threats. The framework is composed of (1) a power monitor which collects power samples and builds a power consumption history from the collected samples, and (2) a data analyzer which generates a power signature from the constructed history. To generate a power signature, simple and effective noise-filtering and data-compression are applied, thus reducing the detection overhead. Similarities between power signatures are measured by the χ2-distance, reducing both false-positive and false-negative detection rates. According to our experimental results on an HP iPAQ running a Windows Mobile OS, the proposed framework achieves significant (up to 95%) storage-savings without losing the detection accuracy, and a 99% true-positive rate in classifying mobile malware.", "title": "" }, { "docid": "9637537d6aeb6545d59eefaaaf2bdafa", "text": "The swing-up maneuver of the double pendulum on a cart serves to demonstrate a new approach of inversion-based feedforward control design introduced recently. The concept treats the transition task as a nonlinear two-point boundary value problem of the internal dynamics by providing free parameters in the desired output trajectory for the cart position. A feedback control is designed with linear methods to stabilize the swing-up maneuver. The emphasis of the paper is on the experimental realization of the double pendulum swing-up, which reveals the accuracy of the feedforward/feedback control scheme. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4b2a16c023937db4f417d52b070cc2cc", "text": "Endosomal protein trafficking is an essential cellular process that is deregulated in several diseases and targeted by pathogens. Here, we describe a role for ubiquitination in this process. We find that the E3 RING ubiquitin ligase, MAGE-L2-TRIM27, localizes to endosomes through interactions with the retromer complex. Knockdown of MAGE-L2-TRIM27 or the Ube2O E2 ubiquitin-conjugating enzyme significantly impaired retromer-mediated transport. We further demonstrate that MAGE-L2-TRIM27 ubiquitin ligase activity is required for nucleation of endosomal F-actin by the WASH regulatory complex, a known regulator of retromer-mediated transport. Mechanistic studies showed that MAGE-L2-TRIM27 facilitates K63-linked ubiquitination of WASH K220. Significantly, disruption of WASH ubiquitination impaired endosomal F-actin nucleation and retromer-dependent transport. 
These findings provide a cellular and molecular function for MAGE-L2-TRIM27 in retrograde transport, including an unappreciated role of K63-linked ubiquitination and identification of an activating signal of the WASH regulatory complex.", "title": "" }, { "docid": "8986de609f238e83623c7130a9ab9253", "text": "The color psychology literature has made a convincing case that color is not just about aesthetics, but also about meaning. This work has involved situational manipulations of color, rendering it uncertain as to whether color-meaning associations can be used to characterize how people differ from each other. The present research focuses on the idea that the color red is linked to, or associated with, individual differences in interpersonal hostility. Across four studies (N = 376 undergraduates), red preferences and perceptual biases were measured along with individual differences in interpersonal hostility. It was found that (a) a preference for the color red was higher as interpersonal hostility increased, (b) hostile people were biased to see the color red more frequently than nonhostile people, and (c) there was a relationship between a preference for the color red and hostile social decision making. These studies represent an important extension of the color psychology literature, highlighting the need to attend to person-based, as well as situation-based, factors.", "title": "" }, { "docid": "6dfb62138ad7e0c23826a2c6b7c2507e", "text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics between Mandarin Chinese and English, it is worthwhile to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using an end-to-end learning method. The system is based on a combination of a deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using the CTC-trained network and a language model. Although these results are not as good as the performance of a DNN-HMM hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-to-end speech recognition system.", "title": "" }, { "docid": "bbe43ff06e30a5cf2e9477a60c0bb6ff", "text": "As the Internet of Things (IoT) paradigm gains popularity, the next few years will likely witness 'servitization' of domain sensing functionalities. We envision a cloud-based eco-system in which high-quality data from large numbers of independently-managed sensors is shared or even traded in real-time. Such an eco-system will necessarily have multiple stakeholders such as sensor data providers, domain applications that utilize sensor data (data consumers), and cloud infrastructure providers who may collaborate as well as compete. While there has been considerable research on wireless sensor networks, the challenges involved in building cloud-based platforms for hosting sensor services are largely unexplored. In this paper, we present our vision for data quality (DQ)-centric big data infrastructure for federated sensor service clouds.
We first motivate our work by providing real-world examples. We outline the key features that federated sensor service clouds need to possess. This paper proposes a big data architecture in which DQ is pervasive throughout the platform. Our architecture includes a markup language called SDQ-ML for describing sensor services as well as for domain applications to express their sensor feed requirements. The paper explores the advantages and limitations of current big data technologies in building various components of the platform. We also outline our initial ideas towards addressing the limitations.", "title": "" }, { "docid": "f12de00e1b3fc390d197aabd41a64f87", "text": "Wireless ad hoc sensor networks have emerged as one of the key growth areas for wireless networking and computing technologies. So far these networks/systems have been designed with static and custom architectures for specific tasks, thus providing inflexible operation and interaction capabilities. Our vision is to create sensor networks that are open to multiple transient users with dynamic needs. Working towards this vision, we propose a framework to define and support lightweight and mobile control scripts that allow the computation, communication, and sensing resources at the sensor nodes to be efficiently harnessed in an application-specific fashion. The replication/migration of such scripts in several sensor nodes allows the dynamic deployment of distributed algorithms into the network. Our framework, SensorWare, defines, creates, dynamically deploys, and supports such scripts. Our implementation of SensorWare occupies less than 180Kbytes of code memory and thus easily fits into several sensor node platforms. Extensive delay measurements on our iPAQ-based prototype sensor node platform reveal the small overhead of SensorWare to the algorithms (less than 0.3msec in most high-level operations). In return, the programmer of the sensor network receives compactness of code, abstraction services for all of the node's modules, and in-built multi-user support. With these features, SensorWare not only makes dynamic programming of sensor networks possible, but also makes it easy and efficient without restricting the expressiveness of the algorithms.", "title": "" }, { "docid": "d6b213889ba6073b0987852e31b98c6a", "text": "Nowadays, large volumes of multimedia data are outsourced to the cloud to better serve mobile applications. Along with this trend, highly correlated datasets can occur commonly, where the rich information buried in correlated data is useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, we aim to provide a mobile-friendly design that saves the transmission cost for mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. First, we propose a secure and efficient index design that allows the mobile client to securely find from encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support secure image reproduction from encrypted candidate selection. We formally analyze the security strength of the design.
Our experiments explicitly show that both the bandwidth and energy consumptions at the mobile client can be saved, while achieving all service requirements and security guarantees.", "title": "" }, { "docid": "ecb06a681f7d14fc690376b4c5a630af", "text": "Diverse proprietary network appliances increase both the capital and operational expense of service providers, meanwhile causing problems of network ossification. Network function virtualization (NFV) is proposed to address these issues by implementing network functions as pure software on commodity and general hardware. NFV allows flexible provisioning, deployment, and centralized management of virtual network functions. Integrated with SDN, the software-defined NFV architecture further offers agile traffic steering and joint optimization of network functions and resources. This architecture benefits a wide range of applications (e.g., service chaining) and is becoming the dominant form of NFV. In this survey, we present a thorough investigation of the development of NFV under the software-defined NFV architecture, with an emphasis on service chaining as its application. We first introduce the software-defined NFV architecture as the state of the art of NFV and present relationships between NFV and SDN. Then, we provide a historic view of the involvement from middlebox to NFV. Finally, we introduce significant challenges and relevant solutions of NFV, and discuss its future research directions by different application domains.", "title": "" }, { "docid": "5589615ee24bf5ba1ac5def2c5bc556e", "text": "The computer industry is at a major inflection point in its hardware roadmap due to the end of a decades-long trend of exponentially increasing clock frequencies. Instead, future computer systems are expected to be built using homogeneous and heterogeneous many-core processors with 10’s to 100’s of cores per chip, and complex hardware designs to address the challenges of concurrency, energy efficiency and resiliency. Unlike previous generations of hardware evolution, this shift towards many-core computing will have a profound impact on software. These software challenges are further compounded by the need to enable parallelism in workloads and application domains that traditionally did not have to worry about multiprocessor parallelism in the past. A recent trend in mainstream desktop systems is the use of graphics processor units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. Unfortunately, hybrid programming models that support multithreaded execution on CPUs in parallel with CUDA execution on GPUs prove to be too complex for use by mainstream programmers and domain experts, especially when targeting platforms with multiple CPU cores and multiple GPU devices. In this paper, we extend past work on Intel’s Concurrent Collections (CnC) programming model to address the hybrid programming challenge using a model called CnC-CUDA. CnC is a declarative and implicitly parallel coordination language that supports flexible combinations of task and data parallelism while retaining determinism. CnC computations are built using steps that are related by data and control dependence edges, which are represented by a CnC graph. The CnC-CUDA extensions in this paper include the definition of multithreaded steps for execution on GPUs, and automatic generation of data and control flow between CPU steps and GPU steps. 
Experimental results show that this approach can yield significant performance benefits with both GPU execution and hybrid CPU/GPU execution.", "title": "" }, { "docid": "bb2c01181664baaf20012e321b5e1f9f", "text": "Systems able to suggest items that a user may be interested in are usually named as Recommender Systems. The new emergent field of Recommender Systems has undoubtedly gained much interest in the research community. Although Recommender Systems work well in suggesting books, movies and items of general interest, many users express today a feeling that the existing systems don’t actually identify them as individual personalities. This dissatisfaction turned the research society towards the development of new approaches on Recommender Systems, more user-centric. A methodology originated from Decision Theory is exploited herein, aiming to address to the lack of personalization in Recommender Systems by integrating the user in the recommendation process.", "title": "" }, { "docid": "f70f825996544350b21177246cb39803", "text": "The goal of our work is to develop an efficient, automatic algorithm for discovering point correspondences between surfaces that are approximately and/or partially isometric.\n Our approach is based on three observations. First, isometries are a subset of the Möbius group, which has low-dimensionality -- six degrees of freedom for topological spheres, and three for topological discs. Second, computing the Möbius transformation that interpolates any three points can be computed in closed-form after a mid-edge flattening to the complex plane. Third, deviations from isometry can be modeled by a transportation-type distance between corresponding points in that plane.\n Motivated by these observations, we have developed a Möbius Voting algorithm that iteratively: 1) samples a triplet of three random points from each of two point sets, 2) uses the Möbius transformations defined by those triplets to map both point sets into a canonical coordinate frame on the complex plane, and 3) produces \"votes\" for predicted correspondences between the mutually closest points with magnitude representing their estimated deviation from isometry. The result of this process is a fuzzy correspondence matrix, which is converted to a permutation matrix with simple matrix operations and output as a discrete set of point correspondences with confidence values.\n The main advantage of this algorithm is that it can find intrinsic point correspondences in cases of extreme deformation. During experiments with a variety of data sets, we find that it is able to find dozens of point correspondences between different object types in different poses fully automatically.", "title": "" }, { "docid": "d6628b102e8f87e8ce58c2e3483a7beb", "text": "Nowadays, Big Data platforms allow the analysis of massive data streams in an efficient way. However, the services they provide are often too raw, thus the implementation of advanced real-world applications requires a non-negligible effort for interfacing with such services. This also complicates the task of choosing which one of the many available alternatives is the most appropriate for the application at hand. In this paper, we present a comparative study of the three major opensource Big Data platforms for stream processing, as performed by using our novel RAMS framework. 
Although the results we present are specific for our use case (recognition of suspect people from massive video streams), the generality of the RAMS framework allows both considering such results as valid for similar applications and implementing different use cases on top of Big Data platforms with very limited effort.", "title": "" }, { "docid": "c0484f3055d7e7db8dfea9d4483e1e06", "text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.", "title": "" }, { "docid": "fb1a178c7c097fbbf0921dcef915dc55", "text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.", "title": "" }, { "docid": "ca2d9b2fe08cda70aa37410aa30e2f2a", "text": "3D human pose estimation from a single image is a challenging problem, especially for in-the-wild settings due to the lack of 3D annotated data. We propose two anatomically inspired loss functions and use them with the weaklysupervised learning framework of [41] to jointly learn from large-scale in-thewild 2D and indoor/synthetic 3D data. 
We also present a simple temporal network that exploits temporal and structural cues present in predicted pose sequences to temporally harmonize the pose estimations. We carefully analyze the proposed contributions through loss surface visualizations and sensitivity analysis to facilitate deeper understanding of their working mechanism. Our complete pipeline improves the state-of-the-art by 11.8% and 12% on Human3.6M and MPI-INF-3DHP, respectively, and runs at 30 FPS on a commodity graphics card.", "title": "" } ]
scidocsrr
e9ca5c76db76105bcbde6adade74d2d9
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
[ { "docid": "07ef2766f22ac6c5b298e3f833cd88b5", "text": "A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combining additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and cars ~80% although the overall episodic object detection rate for each flight pattern exceeds 90%.", "title": "" }, { "docid": "4f511a669a510153aa233d90da4e406a", "text": "In many visual surveillance applications the task of person detection and localization can be solved more easily by using thermal long-wave infrared (LWIR) cameras which are less affected by changing illumination or background texture than visual-optical cameras. Especially in outdoor scenes where usually only few hot spots appear in thermal infrared imagery, humans can be detected more reliably due to their prominent infrared signature. We propose a two-stage person recognition approach for LWIR images: (1) the application of Maximally Stable Extremal Regions (MSER) to detect hot spots instead of background subtraction or sliding window and (2) the verification of the detected hot spots using a Discrete Cosine Transform (DCT) based descriptor and a modified Random Naïve Bayes (RNB) classifier. The main contributions are the novel modified RNB classifier and the generality of our method. We achieve high detection rates for several different LWIR datasets with low resolution videos in real-time. While many papers in this topic are dealing with strong constraints such as considering only one dataset, assuming a stationary camera, or detecting only moving persons, we aim at avoiding such constraints to make our approach applicable with moving platforms such as Unmanned Ground Vehicles (UGV).", "title": "" } ]
[ { "docid": "9c707afc8a0312ebab0ebd1b7fcb4c47", "text": "This paper develops analytical principles for torque ripple reduction in interior permanent magnet (IPM) synchronous machines. The significance of slot harmonics and the benefits of stators with odd number of slots per pole pair are highlighted. Based on these valuable analytical insights, this paper proposes coordination of the selection of stators with odd number of slots per pole pair and IPM rotors with multiple layers of flux barriers in order to reduce torque ripple. The effectiveness of using stators with odd number of slots per pole pair in reducing torque ripple is validated by applying a finite-element-based Monte Carlo optimization method to four IPM machine topologies, which are combinations of two stator topologies (even or odd number of slots per pole pair) and two IPM rotor topologies (one- or two-layer). It is demonstrated that the torque ripple can be reduced to less than 5% by selecting a stator with an odd number of slots per pole pair and the IPM rotor with optimized barrier configurations, without using stator/rotor skewing or rotor pole shaping.", "title": "" }, { "docid": "8e19813c7257c8d8d73867b9a4f9fa8d", "text": "Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.", "title": "" }, { "docid": "523983cad60a81e0e6694c8d90ab9c3d", "text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. 
Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.", "title": "" }, { "docid": "c052f693b65a0f3189fc1e9f4df11162", "text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.", "title": "" }, { "docid": "11ce5f5e7c6249165ba2a5d8c3249c9f", "text": "BACKGROUND & AIMS\nHepatitis C virus (HCV) infection is a significant global health issue that leads to 350,000 preventable deaths annually due to associated cirrhosis and hepatocellular carcinoma (HCC). Immigrants and refugees (migrants) originating from intermediate/high HCV endemic countries are likely at increased risk for HCV infection due to HCV exposure in their countries of origin. The aim of this study was to estimate the HCV seroprevalence of the migrant population living in low HCV prevalence countries.\n\n\nMETHODS\nFour electronic databases were searched from database inception until June 17, 2014 for studies reporting the prevalence of HCV antibodies among migrants. Seroprevalence estimates were pooled with a random-effect model and were stratified by age group, region of origin and migration status and a meta-regression was modeled to explore heterogeneity.\n\n\nRESULTS\nData from 50 studies representing 38,635 migrants from all world regions were included. The overall anti-HCV prevalence (representing previous and current infections) was 1.9% (95% CI, 1.4-2.7%, I2 96.1). Older age and region of origin, particularly Sub-Saharan Africa, Asia, and Eastern Europe were the strongest predictors of HCV seroprevalence. 
The estimated HCV seroprevalence of migrants from these regions was >2% and is higher than that reported for most host populations.\n\n\nCONCLUSION\nAdult migrants originating from Asia, Sub-Saharan Africa and Eastern Europe are at increased risk for HCV and may benefit from targeted HCV screening.", "title": "" }, { "docid": "8e2006ca72dbc6be6592e21418b7f3ba", "text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.", "title": "" }, { "docid": "458d93cc710417f22ccf5fdd1c6c0a71", "text": "This paper presents a low-profile planar dipole antenna with omnidirectional radiation pattern and filtering response. The proposed antenna consists of a microstrip-to-slotline transition structure as the feeding network and a planar dipole as the radiator. Filtering response is obtained by adding nonradiative elements, including a coupled U-shaped microstrip line and two I-shaped slots, to the feeding network. Within the operating passband, the added nonradiative elements do not work, and thus the in-band radiation performance of the dipole antenna is nearly not affected. However, at the side stopbands, the added elements resonate and prevent the signal passing through the feeding network to the dipole antenna, suppressing the out-of-band radiation significantly. As a result, both satisfactory filtering and radiation performances are obtained. For demonstration, an omnidirectional filtering dipole antenna is implemented. Single-band bandpass filtering responses in both the reflection coefficient and realized gain are obtained. The measured in-band gain is ~2.5 dBi, whereas the out-of-band radiation suppression is more than 15 dB.", "title": "" }, { "docid": "115ed03ccee62fafc1606e6f6fdba1ce", "text": "High voltage SF6 circuit breaker must meet the breaking requirement for large short-circuit current, and ensure absence of breakdown after breaking small current. A 126kV high voltage SF6 circuit breaker was used as the research object in this paper. Based on the calculation results of non-equilibrium arc plasma material parameters, the distribution of pressure, temperature and density were calculated during the breaking progress. The electric field distribution was calculated in the course of flow movement, considering the influence of space charge on dielectric voltage. The change rule of the dielectric recovery progress was given based on the stream theory. The dynamic breakdown test circuit was built to measure the values of breakdown voltage under different open distance. The simulation results and experimental data are analyzed and the results show that: 1) Dielectric recovery speed (175kV/ms) is significantly faster than the voltage recovery rate (37.7kV/ms) during the arc extinguishing process. 
2) The shorter the small current arcing time, the smaller the breakdown margin, so it is necessary to keep the arcing time longer than 0.5ms to ensure a large breakdown margin. 3) The calculated results are in good agreement with the experimental results. Since the breakdown voltage is less than the TRV in some test points, restrike may occur within 0.5ms after breaking, so arc extinguishment should be avoided in this time range.", "title": "" }, { "docid": "09f19a5e4751dc3ee4aa38817aafd3cf", "text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013", "title": "" }, { "docid": "14dc4a684d4c9ea310ae8b8b47dee3f6", "text": "Computational models in psychology are precise, fully explicit scientific hypotheses. Over the past 15 years, probabilistic modeling of human cognition has yielded quantitative theories of a wide variety of reasoning and learning phenomena. Recently, Marcus and Davis (2013) critique several examples of this work, using these critiques to question the basic validity of the probabilistic approach. Contra the broad rhetoric of their article, the points made by Marcus and Davis—while useful to consider—do not indicate systematic problems with the probabilistic modeling enterprise. Computational models in psychology are precise, fully explicit scientific hypotheses. Probabilistic models in particular formalize hypotheses about the beliefs of agents—their knowledge and assumptions about the world—using the structured collection of probabilities referred to as priors, likelihoods, etc. The probability calculus then describes inferences that can be drawn by combining these beliefs with new evidence, without the need to commit to a process-level explanation of how these inferences are performed (Marr, 1982). Over the past 15 years, probabilistic modeling of human cognition has yielded quantitative theories of a wide variety of phenomena (Tenenbaum, Kemp, Griffiths, & Goodman, 2011). Marcus and Davis (2013, henceforth, M&D) critique several examples of this work, using these critiques to question the basic validity of the probabilistic models approach, based on the existence of alternative models and potentially inconsistent data. Contra the broad rhetoric of their article, the points made by M&D—while useful to consider—do not indicate systematic problems with the probabilistic modeling enterprise. Several objections stem from a fundamental confusion about the status of optimality in probabilistic modeling, which has been discussed in responses to other critiques (see: Griffiths, Chater, Norris, & Pouget, 2012; Frank, 2013). Briefly: an optimal analysis is not the optimal analysis for a task or domain. Different probabilistic models instantiate different psychological hypotheses. Optimality provides a bridging assumption between these hypotheses and human behavior; one that can be re-examined or overturned as the data warrant. Model selection. M&D argue that individual probabilistic models require a host of potentially problematic modeling choices. Indeed, probabilistic models are created via a series of choices concerning priors, likelihoods, response functions, etc. Each of these choices embodies a proposal about cognition, and these proposals will often be wrong. The identification of model assumptions that result in a mismatch to empirical data allows these assumptions to be replaced or refined.
Systematic iteration to achieve a better model is part of the normal progress of science. But if choices are made post-hoc, a model can be overfit to the particulars of the empirical data. M&D suggest that certain of our models suffer from this issue. For instance, they show that data on pragmatic inference (Frank & Goodman, 2012) are inconsistent with an alternative variant of the proposed model that uses a hard-max rather than a soft-max function, and ask whether the choice of soft-max was dependent on the data. The soft-max rule is foundational in economics, decision-theory, and cognitive psychology (Luce, 1959, 1977), and we first selected it for this problem based on a completely independent set of experiments (Frank, Goodman, Lai, & Tenenbaum, 2009). So it's hard to see how a claim of overfitting is warranted here. Modelers must balance unification with exploration of model assumptions across tasks, but this issue is a general one for all computational work, and does not constitute a systematic problem with the probabilistic approach. Task selection. M&D suggested that probabilistic modelers report results on only the narrow range of tasks on which their models succeed. But their critique focused on a few high-profile, short reports that represented our first attempts to engage with important domains of cognition. Such papers necessarily have less in-depth engagement with empirical data than more extensive and mature work, though they also exemplify the applicability of probabilistic modeling to domains previously viewed as too complex for quantitative approaches. There is broader empirical adequacy to probabilistic models of cognition than M&D imply. If M&D had surveyed the literature they would have found substantial additional evidence for the models they reviewed—and more has accrued since their critique. For example, M&D critiqued Griffiths and Tenenbaum's (2006) analysis of everyday predictions for failing to provide independent assessments of the contributions of priors and likelihoods, precisely what was done in several later and much longer papers (Griffiths & Tenenbaum, 2011; Lewandowsky, Griffiths, & Kalish, 2009). They similarly critiqued the particular tasks selected by Battaglia, Hamrick, and Tenenbaum (2013) without discussing the growing literature testing similar “noisy Newtonian” models on other phenomena (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2012; Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2014; Sanborn, Mansinghka, & Griffiths, 2013; Smith, Dechter, Tenenbaum, & Vul, 2013; Téglás et al., 2011). Smith, Battaglia, and Vul (2013) even directly address exactly the challenge M&D posed regarding classic findings of errors in physical intuitions. In other domains, such as concept learning and inductive inference, where there is an extensive experimental tradition, probabilistic models have engaged with diverse empirical data collected by multiple labs over many years (e.g. Goodman, Tenenbaum, Feldman, & Griffiths, 2008; Kemp & Tenenbaum, 2009). M&D also insinuate empirical problems that they do not test. For instance, in criticizing the choice of dependent measure used by Frank and Goodman (2012), they posit that a forced-choice task would yield a qualitatively different pattern (discrete rather than graded responding). In fact, a forced-choice version of the task produces graded patterns of responding across a wide variety of conditions (Stiller, Goodman, & Frank, 2011, 2014; Vogel, Emilsson, Frank, Jurafsky, & Potts, 2014). Conclusions.
We agree with M&D that there are real and important challenges for probabilistic models of cognition, as there will be for any approach to modeling a system as complex as the human mind. To us, the most pressing challenges include understanding the relationship to lower levels of psychological analysis and neural implementation, integrating additional formal tools, clarifying the philosophical status of the models, extending to new domains of cognition, and, yes: engaging with additional empirical data in the current domains while unifying specific model choices into broader principles. As M&D state, “ultimately, the Bayesian approach should be seen as a useful tool”—one that we believe has already proven its robustness and relevance by allowing us to form and test quantitatively accurate psychological hypotheses.", "title": "" }, { "docid": "9b40db1e69a3ad1cc2a1289791e82ae1", "text": "As a nascent area of study, gamification has attracted the interest of researchers in several fields, but such researchers have scarcely focused on creating a theoretical foundation for gamification research. Gamification involves using gamelike features in non-game contexts to motivate users and improve performance outcomes. As a boundary-spanning subject by nature, gamification has drawn the interest of scholars from diverse communities, such as information systems, education, marketing, computer science, and business administration. To establish a theoretical foundation, we need to clearly define and explain gamification in comparison with similar concepts and areas of research. Likewise, we need to define the scope of the domain and develop a research agenda that explicitly considers theory's important role. In this review paper, we set forth the pre-theoretical structures necessary for theory building in this area. Accordingly, we engaged an interdisciplinary group of discussants to evaluate and select the most relevant theories for gamification. Moreover, we developed exemplary research questions to help create a research agenda for gamification. We conclude that using a multi-theoretical perspective in creating a research agenda should help and encourage IS researchers to take a lead role in this promising and emerging area.", "title": "" }, { "docid": "e3d0a58ddcffabb26d5e059d3ae6b370", "text": "HCI ( Human Computer Interaction ) studies the ways humans use digital or computational machines, systems or infrastructures. The study of the barriers encountered when users interact with the various interfaces is critical to improving their use, as well as their experience. Access and information processing is carried out today from multiple devices (computers, tablets, phones... ) which is essential to maintain a multichannel consistency. This complexity increases with environments in which we do not have much experience as users, where interaction with the machine is a challenge even in phases of research: virtual reality environments, augmented reality, or viewing and handling of large amounts of data, where the simplicity and ease of use are critical.", "title": "" }, { "docid": "575208e6df214fa4378fa18be48af51d", "text": "A parser based on logic programming language (DCG) has very useful features; perspicuity, power, generality and so on. However, it does have some drawbacks in which it cannot deal with CFG with left recursive rules, for example. To overcome these drawbacks, a Bottom-Up parser embedded in Prolog (BUP) has been developed.
In BUP, CFG rules are translated into Prolog clauses which work as a bottom-up left corner parser with top-down expectation. BUP is augmented by introducing a “link” relation to reduce the size of a search space. Furthermore, BUP can be revised to maintain partial parsing results to avoid computational duplication. A BUP translator and a BUP tracer which support the development of grammar rules are described.", "title": "" }, { "docid": "9e3263866208bbc6a9019b3c859d2a66", "text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.", "title": "" }, { "docid": "597f097d5206fc259224b905d4d20e20", "text": "W e present here a QT database designed j b r evaluation of algorithms that detect waveform boundaries in the EGG. T h e dataabase consists of 105 fifteen-minute excerpts of two-channel ECG Holter recordings, chosen to include a broad variety of QRS and ST-T morphologies. Waveform bounda,ries for a subset of beats in, these recordings have been manually determined by expert annotators using a n interactive graphic disp1a.y to view both signals simultaneously and to insert the annotations. Examples of each m,orvhologg were inchded in this subset of uniaotated beats; at least 30 beats in each record, 3622 beats in all, were manually a:anotated in Ihe database. In 11 records, two indepen,dent sets of ennotations have been inchded, to a.llow inter-observer variability slwdies. T h e Q T Databnse is available on a CD-ROM in the format previously used for the MIT-BJH Arrhythmia Database ayad the Euro-pean ST-T Database, from which some of the recordings in the &T Database have been obtained.", "title": "" }, { "docid": "11bcb70c341366c170452e8dc77eb07a", "text": "Industrial software systems are known to be used for performing critical tasks in numerous fields. Faulty conditions in such systems can cause system outages that could lead to losses. In order to prevent potential system faults, it is important that anomalous conditions that lead to these faults are detected effectively. Nevertheless, the high complexity of the system components makes anomaly detection a high dimensional machine learning problem. 
This paper presents the application of a deep learning neural network known as Variational Autoencoder (VAE), as the solution to this problem. We show that, when used in an unsupervised manner, VAE outperforms the well-known clustering technique DBSCAN. Moreover, this paper shows that higher recall can be achieved using the semi-supervised one class learning of VAE, which uses only the normal data to train the model. Additionally, we show that one class learning of VAE outperforms semi-supervised one class SVM when training data consist of only a very small amount of anomalous samples. When a tree based ensemble technique is adopted for feature selection, the obtained results evidently demonstrate that the performance of the VAE is highly positively correlated with the selected feature set.", "title": "" }, { "docid": "e91dd3f9e832de48a27048a0efa1b67a", "text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). In this work, a home model is analyzed to demonstrate an energy efficient IoT based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.", "title": "" }, { "docid": "aba638a83116131a62dcce30a7470252", "text": "A general method is proposed to automatically generate a DfT solution aiming at the detection of catastrophic faults in analog and mixed-signal integrated circuits. The approach consists in modifying the topology of the circuit by pulling up (down) nodes and then probing differentiating node voltages. The method generates a set of optimal hardware implementations addressing the multi-objective problem such that the fault coverage is maximized and the silicon overhead is minimized. The new method was applied to a real-case industrial circuit, demonstrating a nearly 100 percent coverage at the expense of an area increase of about 5 percent.", "title": "" }, { "docid": "d911ccb1bbb761cbfee3e961b8732534", "text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.", "title": "" }, { "docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd", "text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. 
Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.", "title": "" } ]
scidocsrr
eab71f73442374babf2640bfcff77394
Cyber Bullying Detection Using Social and Textual Analysis
[ { "docid": "01a1693eb4a50bff875685fb3a9335fa", "text": "Cyber bullying is the use of technology as a medium to bully someone. Although it has been an issue for many years, the recognition of its impact on young people has recently increased. Social networking sites provide a fertile medium for bullies, and teens and young adults who use these sites are vulnerable to attacks. Through machine learning, we can detect language patterns used by bullies and their victims, and develop rules to automatically detect cyber bullying content. The data we used for our project was collected from the website Formspring.me, a question-and-answer formatted website that contains a high percentage of bullying content. The data was labeled using a web service, Amazon's Mechanical Turk. We used the labeled data, in conjunction with machine learning techniques provided by the Weka tool kit, to train a computer to recognize bullying content. Both a C4.5 decision tree learner and an instance-based learner were able to identify the true positives with 78.5% accuracy.", "title": "" }, { "docid": "fcbfa224b2708839e39295f24f4405e1", "text": "A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult \"real-world\" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced andlor the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.", "title": "" }, { "docid": "ac1ca3af693f295cae5a1c5ae74b7caa", "text": "The negative consequences of cyberbullying are becoming more alarming every day and technical solutions that allow for taking appropriate action by means of automated detection are still very limited. Up until now, studies on cyberbullying detection have focused on individual comments only, disregarding context such as users’ characteristics and profile information. In this paper we show that taking user context into account improves the detection of cyberbullying.", "title": "" } ]
[ { "docid": "d6bc7d628187c907857ede30585330f2", "text": "Activation of the trigemino-cervical system constitutes one of the first steps in the genesis of migraine. The objective of this study was to confirm the presence of trigemino-cervical convergence mechanisms and to establish whether such mechanisms may also be of inhibitory origin. We describe a case of a 39-year-old woman suffering from episodic migraine who showed a significant improvement in her frontal headache during migraine attacks if the greater occipital nerve territory was massaged after the appearance of static mechanical allodynia (cortical sensitization). We review trigemino-cervical convergence and diffuse nociceptive inhibitory control (DNIC) mechanisms and suggest that the convergence mechanisms are not only excitatory but also inhibitory.", "title": "" }, { "docid": "c591881de09c709ae2679cacafe24008", "text": "This paper discusses a technique to estimate the position of a sniper using a spatial microphone array placed on elevated platforms. The shooter location is obtained from the exact location of the microphone array, from topographic information of the area and from an estimated direction of arrival (DoA) of the acoustic wave related to the explosion in the gun barrel, which is known as muzzle blast. The estimation of the DOA is based on the time differences the sound wavefront arrives at each pair of microphones, employing a technique known as Generalized Cross Correlation (GCC) with phase transform. The main idea behind the localization procedure used herein is that, based on the DoA, the acoustical path of the muzzle blast (from the weapon to the microphone) can be marked as a straight line on a terrain profile obtained from an accurate digital map, allowing the estimation of the shooter location whenever the microphone array is located on a dominant position. In addition, a new approach to improve the DoA estimation from a cognitive selection of microphones is introduced. In this technique, the microphones selected must form a consistent (sum of delays equal to zero) fundamental loop. The results obtained after processing muzzle blast gunshot signals recorded in a typical scenario show the effectiveness of the proposed method.", "title": "" }, { "docid": "0a7558a172509707b33fcdfaafe0b732", "text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations.
The implementation results demonstrate the feasibility and the efficiency of our proposed framework.", "title": "" }, { "docid": "e0ec22fcdc92abe141aeb3fa67e9e55a", "text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited; if their battery power is fully depleted, the network can become partitioned, so these nodes become critical spots in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. Such unbalanced loads increase the chances of node failure and network partition and reduce the route lifetime and route reliability of MANETs. Because of this, energy consumption has become a vital research topic in wireless infrastructure-less networks, and energy-efficient routing is a most important design criterion for MANETs. This paper focuses on routing approaches that are based on minimizing the energy consumption of individual nodes, among other strategies. It surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks and presents a detailed comparative study of a large number of energy-efficient/power-aware routing protocols in MANETs. The aim of this paper is to help new researchers and application developers explore innovative ideas for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Efficient, Power Aware, Protocol Stack", "title": "" }, { "docid": "ca1a2eafb7d21438bc933c195c94a49d", "text": "The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process.
MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today’s and tomorrow’s clinically motivated research.", "title": "" }, { "docid": "b84971bc1f2d2ebf43815d33cea86c8c", "text": "The container-inhabiting mosquito simulation model (CIMSiM) is a weather-driven, dynamic life table simulation model of Aedes aegypti (L.) and similar nondiapausing Aedes mosquitoes that inhabit artificial and natural containers. This paper presents a validation of CIMSiM simulating Ae. aegypti using several independent series of data that were not used in model development. Validation data sets include laboratory work designed to elucidate the role of diet on fecundity and rates of larval development and survival. Comparisons are made with four field studies conducted in Bangkok, Thailand, on seasonal changes in population dynamics and with a field study in New Orleans, LA, on larval habitat. Finally, predicted ovipositional activity of Ae. aegypti in seven cities in the southeastern United States for the period 1981-1985 is compared with a data set developed by the U.S. Public Health Service. On the basis of these comparisons, we believe that, for stated design goals, CIMSiM adequately simulates the population dynamics of Ae. aegypti in response to specific information on weather and immature habitat. We anticipate that it will be useful in simulation studies concerning the development and optimization of control strategies and that, with further field validation, can provide entomological inputs for a dengue virus transmission model.", "title": "" }, { "docid": "81e0b85a142a81f9e2012f050c43fb43", "text": "The activation of under frequency load shedding (UFLS) is the last automated action against the severe frequency drops in order to rebalance the system. In this paper, the setting parameters of a multistage load shedding plan are obtained and optimized using a discretized model of dynamic system frequency response. The uncertainties of system parameters including inertia time constant, load damping, and generation deficiency are taken into account. The proposed UFLS model is formulated as a mixed-integer linear programming optimization problem to minimize the expected amount of load shedding. The activation of rate-of-change-of-frequency relays as the anti-islanding protection of distributed generators is considered. The Monte Carlo simulation method is utilized for modeling the uncertainties of system parameters. The results of probabilistic UFLS are then utilized to design four different UFLS strategies. The proposed dynamic UFLS plans are simulated over the IEEE 39-bus and the large-scale practical Iranian national grid.", "title": "" }, { "docid": "628c8b906e3db854ea92c021bb274a61", "text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. 
Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-of-the-art methods.", "title": "" }, { "docid": "81e6994ef76d537b8905cf6b8271c895", "text": "Programming language design benefits from constructs for extending the syntax and semantics of a host language. While C's string-based macros empower programmers to introduce notational shorthands, the parser-level macros of Lisp encourage experimentation with domain-specific languages. The Scheme programming language improves on Lisp with macros that respect lexical scope.\n The design of Racket---a descendant of Scheme---goes even further with the introduction of a full-fledged interface to the static semantics of the language. A Racket extension programmer can thus add constructs that are indistinguishable from \"native\" notation, large and complex embedded domain-specific languages, and even optimizing transformations for the compiler backend. This power to experiment with language design has been used to create a series of sub-languages for programming with first-class classes and modules, numerous languages for implementing the Racket system, and the creation of a complete and fully integrated typed sister language to Racket's untyped base language.\n This paper explains Racket's language extension API via an implementation of a small typed sister language. The new language provides a rich type system that accommodates the idioms of untyped Racket. Furthermore, modules in this typed language can safely exchange values with untyped modules. Last but not least, the implementation includes a type-based optimizer that achieves promising speedups. Although these extensions are complex, their Racket implementation is just a library, like any other library, requiring no changes to the Racket implementation.", "title": "" }, { "docid": "c69d15a44bcb779394df5776e391ec23", "text": "Ankylosing spondylitis (AS) is a chronic and inflammatory rheumatic disease, characterized by pain and structural and functional impairments, such as reduced mobility and axial deformity, which lead to diminished quality of life. Its treatment includes not only drugs, but also nonpharmacological therapy. Exercise appears to be a promising modality. The aim of this study is to review the current evidence and evaluate the role of exercise either on land or in water for the management of patients with AS in the biological era.
Systematic review of the literature published until November 2016 in Medline, Embase, Cochrane Library, Web of Science and Scopus databases. Thirty-five studies were included for further analysis (30 concerning land exercise and 5 concerning water exercise; combined or not with biological drugs), comprising a total of 2515 patients. Most studies showed a positive effect of exercise on Bath Ankylosing Spondylitis Disease Activity Index, Bath Ankylosing Spondylitis Functional Index, pain, mobility, function and quality of life. The benefit was statistically significant in randomized controlled trials. Results support a multimodal approach, including educational sessions and maintaining a home-based program. This study highlights the important role of exercise in management of AS, therefore it should be encouraged and individually prescribed. More studies with good methodological quality are needed to strengthen the results and to define the specific characteristics of exercise programs that determine better results.", "title": "" }, { "docid": "0e965b8941ddb47760300a35b80545be", "text": "Pathological lung segmentation (PLS) is an important, yet challenging, medical image application due to the wide variability of pathological lung appearance and shape. Because PLS is often a prerequisite for other imaging analytics, methodological simplicity and generality are key factors in usability. Along those lines, we present a bottom-up deep-learning based approach that is expressive enough to handle variations in appearance, while remaining unaffected by any variations in shape. We incorporate the deeply supervised learning framework, but enhance it with a simple, yet effective, progressive multi-path scheme, which more reliably merges outputs from different network stages. The result is a deep model able to produce finer detailed masks, which we call progressive holistically-nested networks (P-HNNs). Using extensive cross-validation, our method is tested on a multi-institutional dataset comprising 929 CT scans (848 publicly available) of pathological lungs, reporting mean dice scores of 0.985 and demonstrating significant qualitative and quantitative improvements over state-of-the-art approaches.", "title": "" }, { "docid": "358faa358eb07b8c724efcdb72334dc7", "text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable of providing the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues.
The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.", "title": "" }, { "docid": "43259d0deae71a36d27bebcc15ee5a9b", "text": "The Great Ordovician Biodiversification Event” (GOBE) was arguably the most important and sustained increase of marine biodiversity in Earth’s history. During a short time span of 25 Ma, an “explosion” of diversity at the order, family, genus, and species level occurred. The combined effects of several geological and biological processes helped generate the GOBE. The peak of the GOBE correlates with unique paleogeography, featuring the greatest continental dispersal of the Paleozoic. Rapid sea-floor spreading during this time coincided with warm climates, high sea levels, and the largest tropical shelf area of the Phanerozoic. In addition, important ecological evolutionary changes took place, with the “explosion” of both zooplankton and suspension feeding organisms, possibly based on increased phytoplankton availability and high nutrient input to the oceans driven by intense volcanic activity. Extraterrestrial causes, in the form of asteroid impacts, have also been invoked to explain this remarkable event. INTRODUCTION Although the five major mass extinctions (in particular, the Permian-Triassic and the Cretaceous-Tertiary events) have been extensively documented, until recently, the major biodiversifications and radiations of life on Earth have attracted much less attention. The so-called “Cambrian explosion” is in many ways much better known than the Ordovician and Mesozoic-Cenozoic radiations of marine invertebrates. Although the Cambrian explosion resulted in a range of new and spectacular animal body plans, mostly known from famous FossilLagerstätten, such as the Burgess Shale (Canada), Chengjiang (China), and Sirius Passet (Greenland), the Ordovician radiation is dramatic in different ways (Droser and Finnegan, 2003) and is evident in the “normal” shelly fossil record. The term “The Great Ordovician Biodiversification Event” (GOBE) has been introduced to designate what is arguably the most important increase of biodiversity of marine life during Earth’s history (Webby et al., 2004). While the “Cambrian explosion” involved the origins of skeletalization and a range of new body plans, the Ordovician biodiversification generated few new higher taxa but witnessed a staggering increase in disparity and biodiversity (e.g., Harper, 2006). Barnes et al. (1995) reviewed the global bio-events during the Ordovician, and two international research projects have since targeted the Ordovician biodiversification. International Geoscience Programme (IGCP) Project 410, “The Great Ordovician Biodiversification Event” (1997–2002), resulted in a compilation of biodiversity curves for all fossil groups of the Ordovician biota (Webby et al., 2004). In this compilation, the dramatic increase of diversity of all groups at the specific and/or the generic level became obvious and confirmed the patterns based on previous diversity counts (e.g., Sepkoski, 1981). IGCP 503 started in 2004 under the banner of “Ordovician Palaeogeography and Palaeoclimate” and has focused on the causes and the geological context of the Ordovician biodiversification, including radical changes in the marine trophic chains. 
Possible triggers of the GOBE may include the near-unique paleogeography, the distinctive paleoclimate, the highest sea levels of the Paleozoic (if not the entire Phanerozoic), enhanced nutrient supply as a result of pronounced volcanic activity, and major ecological changes. In addition to these Earth-bound physical and biological drivers of biodiversity change, Schmitz et al. (2008) linked the onset of the major phase of the Ordovician biodiversification with the largest documented asteroid breakup event during the past few billion years. It seems likely that the GOBE was linked to a variety of coincident and interconnected factors. Here we review recent studies, ask “What generated the GOBE?” and indicate the perspectives for future research in this exciting and rapidly advancing field.", "title": "" }, { "docid": "14f127a8dd4a0fab5acd9db2a3924657", "text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticide application thus became one of the important inputs for the high production of corn and wheat in the USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although the extensive use of pesticides has helped in securing enough crop production worldwide, these pesticides are equally toxic or harmful to nontarget organisms such as mammals and birds, and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Their residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, they can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticides and their residues becomes extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. The majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphate pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides, which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting the acetylcholinesterase enzyme [8]. They act as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and, because of the excess acetylcholine (ACh, the impulse-transmitting substance), the affected organ becomes overstimulated. The enzyme is critical to control the transmission of nerve impulses from nerve fibers to the smooth and skeletal muscle cells, secretory cells and autonomic ganglia, and within the central nervous system (CNS).
Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning become manifest [9].", "title": "" }, { "docid": "b69f2c426f86ad0e07172eb4d018b818", "text": "Versatile motor skills for hitting and throwing motions can be observed in humans already at an early age. Future robots require high power-to-weight ratios as well as inherent long operational lifetimes without breakage in order to achieve similar perfection. Robustness due to passive compliance and high-speed catapult-like motions as possible with fast energy release are further beneficial characteristics. Such properties can be realized with antagonistic muscle-based designs. Additionally, control algorithms need to exploit the full potential of the robot. Learning control is a promising direction due to its potential to capture uncertainty and control of complex systems. The aim of this paper is to build a robotic arm that is capable of generating high accelerations and sophisticated trajectories as well as enable exploration at such speeds for robot learning approaches. Hence, we have designed a light-weight robot arm with moving masses below 700 g with powerful antagonistic compliant actuation with pneumatic artificial muscles. Rather than recreating human anatomy, our system is designed to be easy to control in order to facilitate future learning of fast trajectory tracking control. The resulting robot is precise at low speeds using a simple PID controller while reaching high velocities of up to 12 m/s in task space and 1500 deg/s in joint space. This arm will enable new applications in fast-changing and uncertain tasks like robot table tennis while being a sophisticated and reproducible test-bed for robot skill learning methods. Construction details are available.", "title": "" }, { "docid": "b776307764d3946fc4e7f6158b656435", "text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices cannot handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.", "title": "" }, { "docid": "344db754658e580ea441c44987b09286", "text": "Online learning to rank for information retrieval (IR) holds promise for allowing the development of \"self-learning\" search engines that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.\n In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR.
We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our pre-selection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.", "title": "" }, { "docid": "c4d0dc9ef6e982fbfd218fb7b4c92f68", "text": "In this paper, we present new theoretical and experimental results for bidirectional A∗ search. Unlike most previous research on this topic, our results do not require assumptions of either consistent or balanced heuristic functions for the search. Our theoretical work examines new results on the worst-case number of node expansions for inconsistent heuristic functions with bounded estimation errors. Additionally, we consider several alternative termination criteria in order to more quickly terminate the bidirectional search, and we provide worst-case approximation bounds for our suggested criteria. We prove that our approximation bounds are purely additive in nature (a general improvement over previous multiplicative approximations). Experimental evidence on large-scale road networks suggests that the errors introduced are truly quite negligible in practice, while the performance gains are significant.", "title": "" }, { "docid": "26666f489e169255b3d34deb42ae8ad9", "text": "OBJECTIVES\nTo evaluate the registration of 3D models from cone-beam CT (CBCT) images taken before and after orthognathic surgery for the assessment of mandibular anatomy and position.\n\n\nMETHODS\nCBCT scans were taken before and after orthognathic surgery for ten patients with various malocclusions undergoing maxillary surgery only. 3D models were constructed from the CBCT images utilizing semi-automatic segmentation and manual editing. The cranial base was used to register 3D models of pre- and post-surgery scans (1 week). After registration, a novel tool allowed the visual and quantitative assessment of post-operative changes via 2D overlays of superimposed models and 3D coloured displacement maps.\n\n\nRESULTS\n3D changes in mandibular rami position after surgical procedures were clearly illustrated by the 3D colour-coded maps. The average displacement of all surfaces was 0.77 mm (SD=0.17 mm), at the posterior border 0.78 mm (SD=0.25 mm), and at the condyle 0.70 mm (SD=0.07 mm). These displacements were close to the image spatial resolution of 0.60 mm. The average interobserver differences were negligible. The range of the interobserver errors for the average of all mandibular rami surface distances was 0.02 mm (SD=0.01 mm).\n\n\nCONCLUSION\nOur results suggest this method provides a valid and reproducible assessment of craniofacial structures for patients undergoing orthognathic surgery. This technique may be used to identify different patterns of ramus and condylar remodelling following orthognathic surgery.", "title": "" }, { "docid": "47e17d9a02c6a97188108b49f67f986b", "text": "Driver's gaze direction is an indicator of driver state and plays a significantly role in driving safety. 
Traditional gaze zone estimation methods based on an eye model have disadvantages due to their vulnerability under large head movements. Different from these methods, an appearance-based head pose-free eye gaze prediction method is proposed in this paper for driver gaze zone estimation under free head movement. To achieve this goal, a gaze zone classifier is trained with head vectors and eye image features using a random forest. The head vector is calculated by Pose from Orthography and Scaling with ITerations (POSIT), where a 3D face model is combined with facial landmark detection. The eye image features are derived from eye images extracted through eye region localization. These features are presented as the combination of sparse coefficients obtained by sparse encoding with an eye image dictionary, having good potential to carry information of the eye images. Experimental results show that the proposed method is applicable in real driving environments.", "title": "" } ]
scidocsrr
753cdccecf1a83a60dd595b9095c08f2
Neural Network based Extreme Classification and Similarity Models for Product Matching
[ { "docid": "755f7e93dbe43a0ed12eb90b1d320cb2", "text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "9d632b6a40551697a250b2017c29981c", "text": "In this paper, a novel framework for dense pixel matching based on dynamic programming is introduced. Unlike most techniques proposed in the literature, our approach assumes neither known camera geometry nor the availability of rectified images. Under such conditions, the matching task cannot be reduced to finding correspondences between a pair of scanlines. We propose to extend existing dynamic programming methodologies to a larger dimensional space by using a 3D scoring matrix so that correspondences between a line and a whole image can be calculated. After assessing our framework on a standard evaluation dataset of rectified stereo images, experiments are conducted on unrectified and non-linearly distorted images. Results validate our new approach and reveal the versatility of our algorithm.", "title": "" }, { "docid": "107d6605a6159d5a278b49b8c020cdd9", "text": "Internet applications increasingly rely on scalable data structures that must support high throughput and store huge amounts of data. These data structures can be hard to implement efficiently. Recent proposals have overcome this problem by giving up on generality and implementing specialized interfaces and functionality (e.g., Dynamo [4]). We present the design of a more general and flexible solution: a fault-tolerant and scalable distributed B-tree. In addition to the usual B-tree operations, our B-tree provides some important practical features: transactions for atomically executing several operations in one or more B-trees, online migration of B-tree nodes between servers for load-balancing, and dynamic addition and removal of servers for supporting incremental growth of the system. Our design is conceptually simple. Rather than using complex concurrency and locking protocols, we use distributed transactions to make changes to B-tree nodes. We show how to extend the B-tree and keep additional information so that these transactions execute quickly and efficiently. Our design relies on an underlying distributed data sharing service, Sinfonia [1], which provides fault tolerance and a light-weight distributed atomic primitive. We use this primitive to commit our transactions. 
We implemented our B-tree and show that it performs comparably to an existing open-source B-tree and that it scales to hundreds of machines. We believe that our approach is general and can be used to implement other distributed data structures easily.", "title": "" }, { "docid": "ce2139f51970bfa5bd3738392f55ea48", "text": "A novel type of dual circular polarizer for simultaneously receiving and transmitting right-hand and left-hand circularly polarized waves is developed and tested. It consists of an H-plane T junction of rectangular waveguide, one circular waveguide as an E-plane arm located on top of the junction, and two metallic pins used for matching. The theoretical analysis and design of the three-physical-port and four-mode polarizer were carried out by solving the scattering matrix of the network and using a full-wave electromagnetic simulation tool. The optimized polarizer has the advantages of a very compact size with a volume smaller than 0.6λ³, low complexity and manufacturing cost. A couple of the polarizers have been manufactured and tested, and the experimental results are basically consistent with the theories.", "title": "" }, { "docid": "36a615660b8f0c60bef06b5a57887bd1", "text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", a paper which took more than ten years to be published. Quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that it is not possible to measure the quantum state of any system without disturbing that system. The principle of photon polarization states that an eavesdropper cannot copy unknown qubits, i.e. unknown quantum states, due to the no-cloning theorem, which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of quantum cryptography, and how this technology contributes to network security. This research paper summarizes the current state of quantum cryptography, the real-world implementation of this technology, and finally the future direction in which quantum cryptography is headed.", "title": "" }, { "docid": "5b43cce2027f1e5afbf7985ca2d4af1a", "text": "With Internet delivery of video content surging to an unprecedented level, video has become one of the primary sources for online advertising. In this paper, we present VideoSense as a novel contextual in-video advertising system, which automatically associates the relevant video ads and seamlessly inserts the ads at the appropriate positions within each individual video. Unlike most video sites which treat video advertising as general text advertising by displaying video ads at the beginning or the end of a video or around a video, VideoSense aims to embed more contextually relevant ads at less intrusive positions within the video stream. Specifically, given a Web page containing an online video, VideoSense is able to extract the surrounding text related to this video, detect a set of candidate ad insertion positions based on video content discontinuity and attractiveness, and select a list of relevant candidate ads according to multimodal relevance.
To support contextual advertising, we formulate this task as a nonlinear 0-1 integer programming problem by maximizing contextual relevance while minimizing content intrusiveness at the same time. The experiments proved the effectiveness of VideoSense for online video service.", "title": "" }, { "docid": "41a15d3dcca1ff835b5d983a8bb5343f", "text": "We describe the architecture and design of a through-the-wall radar. The radar is applied for the detection and localization of people hidden behind obstacles. It implements a new adaptive processing technique for people detection, which is introduced in this article. This processing technique is based on exponential averaging with adopted weighting coefficients. Through-the-wall detection and localization of a moving person is demonstrated by a measurement example. The localization relies on the time-of-flight approach.", "title": "" }, { "docid": "1e90c85e21c0248e70fae594b152aa8e", "text": "We recently demonstrated a high function wrist watch computer prototype that runs the Linux operating system and also X11 graphics libraries. In this paper we describe the unique energy related challenges and tradeoffs we encountered while building this watch. We show that the usage duty factor for the device heavily dictates which of the powers, active power or sleep power, needs to be minimized more aggressively in order to achieve the longest perceived battery life. We also describe the energy issues that percolate through several layers of software all the way from device usage scenarios, applications, user interfaces, system level software to device drivers and the need to systematically address all of them to achieve the battery life dictated by the hardware components and the capacity of the battery in the device.", "title": "" }, { "docid": "17bc705ba1e4ee9f5620187582be60cc", "text": "A new approach to the synthesis of longitudinal autopilots for missiles flying at high angle of attack regimes is presented. The methodology is based on sliding mode control, and uses a combination of aerodynamic surfaces and reaction jet thrusters, to achieve controllability beyond stall. The autopilot is tested on a small section of the flight envelope consisting of a fast 180 heading reversal in the vertical plane, which requires robustness with respect to uncertainties in the system’s dynamics induced by large variations in dynamic pressure and aerodynamic coefficients. Nonlinear simulation results show excellent performance and capabilities of the control system structure.", "title": "" }, { "docid": "27be379b6192aa6db9101b7ec18d5585", "text": "In this paper, we investigate the problem of detecting depression from recordings of subjects' speech using speech processing and machine learning. There has been considerable interest in this problem in recent years due to the potential for developing objective assessments from real-world behaviors, which may provide valuable supplementary clinical information or may be useful in screening. The cues for depression may be present in “what is said” (content) and “how it is said” (prosody).
Given the limited amounts of text data, even in this relatively large study, it is difficult to employ standard method of learning models from n-gram features. Instead, we learn models using word representations in an alternative feature space of valence and arousal. This is akin to embedding words into a real vector space albeit with manual ratings instead of those learned with deep neural networks [1]. For extracting prosody, we employ standard feature extractors such as those implemented in openSMILE and compare them with features extracted from harmonic models that we have been developing in recent years. Our experiments show that our features from harmonic model improve the performance of detecting depression from spoken utterances than other alternatives. The context features provide additional improvements to achieve an accuracy of about 74%, sufficient to be useful in screening applications.", "title": "" }, { "docid": "485cda7203863d2ff0b2070ca61b1126", "text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.", "title": "" }, { "docid": "773b5914dce6770b2db707ff4536c7f6", "text": "This paper presents an automatic drowsy driver monitoring and accident prevention system that is based on monitoring the changes in the eye blink duration. Our proposed method detects visual changes in eye locations using the proposed horizontal symmetry feature of the eyes. Our new method detects eye blinks via a standard webcam in real-time at 110fps for a 320×240 resolution. Experimental results in the JZU [3] eye-blink database showed that the proposed system detects eye blinks with a 94% accuracy with a 1% false positive rate.", "title": "" }, { "docid": "972be3022e7123be919d9491a6dafe1c", "text": "An improved coaxial high-voltage vacuum insulator applied in a Tesla-type generator, model TPG700, has been designed and tested for high-power microwave (HPM) generation. The design improvements include: changing the connection type of the insulator to the conductors from insertion to tangential, making the insulator thickness uniform, and using Nylon as the insulation material. Transient field simulation shows that the electric field (E-field) distribution within the improved insulator is much more uniform and that the average E-field on the two insulator surfaces is decreased by approximately 30% compared with the previous insulator at a voltage of 700 kV. Key structures such as the anode and the cathode shielding rings of the insulator have been optimized to significantly reduce E-field stresses. Aging experiments and experiments for HPM generation with this insulator were conducted based on a relativistic backward-wave oscillator. The preliminary test results show that the output voltage is larger than 700 kV and the HPM power is about 1 GW. Measurements show that the insulator is well within allowable E-field stresses on both the vacuum insulator surface and the cathode shielding ring.", "title": "" }, { "docid": "b47bbb2a59a26fb0d9c2987bc308bc9d", "text": "Nasal reconstruction is always challenging for plastic surgeons. 
Its midfacial localisation and the relationship between convexities and concavities of nasal subunits make impossible to hide any sort of deformity without a proper reconstruction. Nasal tissue defects can be caused by tumor removal, trauma or by any other insult to the nasal pyramid, like cocaine abuse, developing an irreversible sequela. Due to the special characteristics of the nasal pyramid surface, the removal of the lesion or the debridement must be performed according to nasal subunits as introduced by Burget. Afterwards, the reconstructive technique or a combination of them must be selected according to the size and the localisation of the defect created, and tissue availability to fulfil the procedure. An anatomical reconstruction must be completed as far as possible, trying to restore the nasal lining, the osteocartilaginous framework and the skin cover. In our department, 35 patients were operated on between 2000 and 2002: three bilobed flaps, five nasolabial flaps, two V-Y advancement flaps from the sidewall, three dorsonasal flaps modified by Ohsumi, 19 paramedian forehead flaps, three cheek advancement flaps, three costocondral grafts, two full-thickness skin grafts and two auricular helix free flaps for alar reconstruction. All flaps but one free flap survived with no postoperative complications. After 12-24 months of follow-up, all reconstructions remained stable from cosmetic and functional point of view. Our aim is to present our choice for nasal reconstruction according to the size and localization of the defect, and donor tissue availability.", "title": "" }, { "docid": "655e2fda8fd2e8f7a665ca64047399a0", "text": "This article describes a self-propelled dolphin robot that aims to create a stable and controllable experimental platform. A viable bioinspired approach to generate diverse instances of dolphin-like swimming online via a center pattern generator (CPG) network is proposed.The characteristic parameters affecting three-dimensional (3-D) swimming performance are further identified and discussed. Both interactive and programmed swimming tests are provided to illustrate the validity of the present scheme.", "title": "" }, { "docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b", "text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. 
Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.", "title": "" }, { "docid": "e16c419551e73e9787029460f7047b4d", "text": "Cloud Computing with Virtualization offers attractive flexibility and elasticity to deliver resources by providing a platform for consolidating complex IT resources in a scalable manner. However, efficiently running HPC applications on Cloud Computing systems is still full of challenges. One of the biggest hurdles in building efficient HPC clouds is the unsatisfactory performance offered by underlying virtualized environments, more specifically, virtualized I/O devices. Recently, Single Root I/O Virtualization (SR-IOV) technology has been steadily gaining momentum for high-performance interconnects such as InfiniBand and 10GigE. Due to its near native performance for inter-node communication, many cloud systems such as Amazon EC2 have been using SR-IOV in their production environments. Nevertheless, recent studies have shown that the SR-IOV scheme lacks locality aware communication support, which leads to performance overheads for inter-VM communication within the same physical node. In this paper, we propose an efficient approach to build HPC clouds based on MVAPICH2 over Open Stack with SR-IOV. We first propose an extension for Open Stack Nova system to enable the IV Shmem channel in deployed virtual machines. We further present and discuss our high-performance design of virtual machine aware MVAPICH2 library over Open Stack-based HPC Clouds. Our design can fully take advantage of high-performance SR-IOV communication for inter-node communication as well as Inter-VM Shmem (IVShmem) for intra-node communication. A comprehensive performance evaluation with micro-benchmarks and HPC applications has been conducted on an experimental Open Stack-based HPC cloud and Amazon EC2. The evaluation results on the experimental HPC cloud show that our design and extension can deliver near bare-metal performance for implementing SR-IOV-based HPC clouds with virtualization. Further, compared with the performance on EC2, our experimental HPC cloud can exhibit up to 160X, 65X, 12X improvement potential in terms of point-to-point, collective and application for future HPC clouds.", "title": "" }, { "docid": "2e66317dfe4005c069ceac2d4f9e3877", "text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. 
Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.", "title": "" }, { "docid": "7401d33980f6630191aa7be7bf380ec3", "text": "We present PennCOSYVIO, a new challenging Visual Inertial Odometry (VIO) benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras. Recorded at UPenn's Singh center, the 150m long path of the hand-held rig crosses from outdoors to indoors and includes rapid rotations, thereby testing the abilities of VIO and Simultaneous Localization and Mapping (SLAM) algorithms to handle changes in lighting, different textures, repetitive structures, and large glass surfaces. All sensors are synchronized and intrinsically and extrinsically calibrated. We demonstrate the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers. The data set can be found at https://daniilidis-group.github.io/penncosyvio/.", "title": "" } ]
scidocsrr
eabe324a2abbd5aa247017c3b62cc6c5
Investigation into Big Data Impact on Digital Marketing
[ { "docid": "a2047969c4924a1e93b805b4f7d2402c", "text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.", "title": "" }, { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.", "title": "" } ]
[ { "docid": "1145d2375414afbdd5f1e6e703638028", "text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).", "title": "" }, { "docid": "00277e4562f707d37844e6214d1f8777", "text": "Video super-resolution (SR) aims at estimating a high-resolution video sequence from a low-resolution (LR) one. Given that the deep learning has been successfully applied to the task of single image SR, which demonstrates the strong capability of neural networks for modeling spatial relation within one single image, the key challenge to conduct video SR is how to efficiently and effectively exploit the temporal dependence among consecutive LR frames other than the spatial relation. However, this remains challenging because the complex motion is difficult to model and can bring detrimental effects if not handled properly. We tackle the problem of learning temporal dynamics from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependence. Inspired by the inception module in GoogLeNet [1], filters of various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated, in order to fully exploit the temporal relation among the consecutive LR frames. Second, we decrease the complexity of motion among neighboring frames using a spatial alignment network that can be end-to-end trained with the temporal adaptive network and has the merit of increasing the robustness to complex motion and the efficiency compared with the competing image alignment methods. We provide a comprehensive evaluation of the temporal adaptation and the spatial alignment modules. We show that the temporal adaptive design considerably improves the SR quality over its plain counterparts, and the spatial alignment network is able to attain comparable SR performance with the sophisticated optical flow-based approach, but requires a much less running time. Overall, our proposed model with learned temporal dynamics is shown to achieve the state-of-the-art SR results in terms of not only spatial consistency but also the temporal coherence on public video data sets. More information can be found in http://www.ifp.illinois.edu/~dingliu2/videoSR/.", "title": "" }, { "docid": "c27fb42cf33399c9c84245eeda72dd46", "text": "The proliferation of technology has empowered the web applications. At the same time, the presences of Cross-Site Scripting (XSS) vulnerabilities in web applications have become a major concern for all. Despite the many current detection and prevention approaches, attackers are exploiting XSS vulnerabilities continuously and causing significant harm to the web users. In this paper, we formulate the detection of XSS vulnerabilities as a prediction model based classification problem. A novel approach based on text-mining and pattern-matching techniques is proposed to extract a set of features from source code files. The extracted features are used to build prediction models, which can discriminate the vulnerable code files from the benign ones. 
The efficiency of the developed models is evaluated on a publicly available labeled dataset that contains 9408 labeled (i.e. safe, unsafe) PHP source code files. The experimental results depict the superiority of the proposed approach over existing ones.", "title": "" }, { "docid": "b4796891108f41b1faf054636d3eefd2", "text": "Business process analysis ranges from model verification at design-time to the monitoring of processes at runtime. Much progress has been achieved in process verification. Today we are able to verify the entire reference model of SAP without any problems. Moreover, more and more processes leave their “trail” in the form of event logs. This makes it interesting to apply process mining to these logs. Interestingly, practical applications of process mining reveal that reality is often quite different from the idealized models, also referred to as “PowerPoint reality”. Future process-aware information systems will need to provide full support of the entire life-cycle of business processes. Recent results in business process analysis show that this is indeed possible, e.g., the possibilities offered by process mining tools such as ProM are breathtaking both from a scientific and practical perspective.", "title": "" }, { "docid": "76071bd6bf0874191e2cdd3b491dc6c6", "text": "Steganography is a collection of methods to hide secret information (“payload”) within non-secret information (“container”). Its counterpart, Steganalysis, is the practice of determining if a message contains a hidden payload, and recovering it if possible. The presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows generating more steganalysis-secure message embedding using standard steganography algorithms. Experimental results demonstrate that the new model successfully deceives the steganography analyzer, and for this reason, can be used in steganographic applications.", "title": "" }, { "docid": "3132db67005f04591f93e77a2855caab", "text": "Money laundering refers to activities pertaining to hiding the true income, evading taxes, or converting illegally earned money for normal use. These activities are often performed through shell companies that masquerade as real companies but where the actual purpose is to launder money. Shell companies are used in all three phases of money laundering, namely, placement, layering, and integration, often simultaneously. In this paper, we aim to identify shell companies. We propose to use only bank transactions since they are easily available. In particular, we look at all incoming and outgoing transactions from a particular bank account along with its various attributes, and use anomaly detection techniques to identify the accounts that pertain to shell companies. Our aim is to create an initial list of potential shell company candidates which can be investigated by financial experts later. Due to lack of real data, we propose a banking transactions simulator (BTS) to simulate both honest as well as shell company transactions by studying a host of actual real-world fraud cases. We apply anomaly detection algorithms to detect candidate shell companies.
Results indicate that we are able to identify the shell companies with a high degree of precision and recall.1", "title": "" }, { "docid": "dfcc6b34f008e4ea9d560b5da4826f4d", "text": "The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. Keywords—Gesture recognition, Kinect, shadow play animation, VRPN.", "title": "" }, { "docid": "94e2bfa218791199a59037f9ea882487", "text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.", "title": "" }, { "docid": "c41038d0e3cf34e8a1dcba07a86cce9a", "text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common cause of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. 
Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.", "title": "" }, { "docid": "1a962bcbd5b670e532d841a74c2fe724", "text": "In SCADA systems, many RTUs (Remote Terminal Units) are used for field data collection as well as for sending data to the master node through the communication system. In such a case, the master node presents the collected data and enables the manager to handle the remote controlling activities. The RTU is the unit of data acquisition, operating in a standalone manner. The processor used in an RTU is vulnerable to random faults due to the harsh environment around RTUs. Faults may lead to the failure of the RTU unit, which then becomes inaccessible for information acquisition. For long-running methods, fault tolerance has been a major concern and research problem for the last two decades. With the increasing use of SCADA systems, the problem of fault tolerance is becoming severe. Efficient fault tolerance is needed to handle faults, such as RTU faults and message passing layer faults in the communication system, while performing message passing through all the layers of the SCADA communication system. SCADA is one of the applications of MPI. Several fault tolerance techniques have been described for MPI, which are utilized in different applications such as SCADA. The goal of this paper is to present a study of the different fault tolerance techniques which can be used to optimize SCADA system availability by mitigating the faults in RTU devices and communication systems.", "title": "" }, { "docid": "f89107f7ae4a250af36630aba072b7a9", "text": "The new HTML5 standard provides much more access to client resources, such as user location and local data storage. Unfortunately, this greater access may create new security risks that potentially can yield new threats to user privacy and web attacks. One of these security risks lies with the HTML5 client-side database. It appears that data stored on the client file system is unencrypted. Therefore, any stored data might be at risk of exposure. This paper explains and performs a security investigation into how the data is stored on client local file systems. The investigation was undertaken using Firefox and Chrome web browsers, and EnCase (a computer forensic tool) was used to examine the stored data. This paper describes how the data can be retrieved after an application deletes the client side database. Finally, based on our findings, we propose a solution to correct any potential issues and security risks, and recommend ways to store data securely on local file systems.", "title": "" }, { "docid": "e2762e01ccf8319c726f3702867eeb8e", "text": "Balance maintenance and upright posture recovery under unexpected environmental forces are key requirements for safe and successful co-existence of humanoid robots in normal human environments. In this paper we present a two-phase control strategy for robust balance maintenance under a force disturbance.
The first phase, called the reflex phase, is designed to withstand the immediate effect of the force. The second phase is the recovery phase where the system is steered back to a statically stable “home” posture. The reflex control law employs angular momentum and is characterized by its counter-intuitive quality of “yielding” to the disturbance. The recovery control employs a general scheme of seeking to maximize the potential energy and is robust to local ground surface feature. Biomechanics literature indicates a similar strategy in play during human balance maintenance.", "title": "" }, { "docid": "bef119e43fcc9f2f0b50fdf521026680", "text": "Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet, the lack of a standardized evaluation platform tailored to the needs of AIA, has hindered effective evaluation of its methods, especially for region-based AIA. Therefore in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark; an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.", "title": "" }, { "docid": "dc1cfdda40b23849f11187ce890c8f8b", "text": "Controlled sharing of information is needed and desirable for many applications and is supported in operating systems by access control mechanisms. This paper shows how to extend programming languages to provide controlled sharing. The extension permits expression of access constraints on shared data. Access constraints can apply both to simple objects, and to objects that are components of larger objects, such as bank account records in a bank's data base. The constraints are stated declaratively, and can be enforced by static checking similar to type checking. The approach can be used to extend any strongly-typed language, but is particularly suitable for extending languages that support the notion of abstract data types.", "title": "" }, { "docid": "00e5acdfb1e388b149bc729a7af108ee", "text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages.
The proposed method achieved up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low.", "title": "" }, { "docid": "b1ef75c4a0dc481453fb68e94ec70cdc", "text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, are a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.", "title": "" }, { "docid": "6e4f0a770fe2a34f99957f252110b6bd", "text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.", "title": "" }, { "docid": "cad54b58e3dd47e1e92078519660e71d", "text": "Web images come with valuable contextual information. Although this information has long been mined for various uses such as image annotation, clustering of images, inference of image semantic content, etc., insufficient attention has been given to addressing issues in mining this contextual information. In this paper, we propose a webpage segmentation algorithm targeting the extraction of web images and their contextual information based on their characteristics as they appear on webpages. We conducted a user study to obtain a human-labeled dataset to validate the effectiveness of our method, and experiments demonstrated that our method can achieve better results compared to an existing segmentation algorithm.", "title": "" }, { "docid": "7df97d3a5c393053b22255a0414e574a", "text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u, a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n^2 log n)-time algorithm for this problem.
We give an implementation of Suurballe’s algorithm that runs in O(m log(1+m/n) n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.", "title": "" }, { "docid": "5d9112213e6828d5668ac4a33d4582f9", "text": "This paper describes four patients whose chief symptoms were steatorrhoea and loss of weight. Despite the absence of a history of abdominal pain investigations showed that these patients had chronic pancreatitis, which responded to medical treatment. The pathological findings in two of these cases and in six which came to necropsy are reported.", "title": "" } ]
scidocsrr
e36e96392a43f1abbd82341feee681d5
Blockchain based trust & authentication for decentralized sensor networks
[ { "docid": "9f6e103a331ab52b303a12779d0d5ef6", "text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.", "title": "" } ]
[ { "docid": "a400a4c5c108b1c3bfff999429fd9478", "text": "Chemical genetic studies on acetyl-CoA carboxylases (ACCs), rate-limiting enzymes in long chain fatty acid biosynthesis, have greatly advanced the understanding of their biochemistry and molecular biology and promoted the use of ACCs as targets for herbicides in agriculture and for development of drugs for diabetes, obesity and cancers. In mammals, ACCs have both biotin carboxylase (BC) and carboxyltransferase (CT) activity, catalyzing carboxylation of acetyl-CoA to malonyl-CoA. Several classes of small chemicals modulate ACC activity, including cellular metabolites, natural compounds, and chemically synthesized products. This article reviews chemical genetic studies of ACCs and the use of ACCs for targeted therapy of cancers.", "title": "" }, { "docid": "77f0b791691135b90cf231d6061a0a5f", "text": "The hyperlink structure of Wikipedia forms a rich semantic network connecting entities and concepts, enabling it as a valuable source for knowledge harvesting. Wikipedia, as crowd-sourced data, faces various data quality issues which significantly impacts knowledge systems depending on it as the information source. One such issue occurs when an anchor text in a Wikipage links to a wrong Wikipage, causing the error link problem. While much of previous work has focused on leveraging Wikipedia for entity linking, little has been done to detect error links.\n In this paper, we address the error link problem, and propose algorithms to detect and correct error links. We introduce an efficient method to generate candidate error links based on iterative ranking in an Anchor Text Semantic Network. This greatly reduces the problem space. A more accurate pairwise learning model was used to detect error links from the reduced candidate error link set, while suggesting correct links in the same time. This approach is effective when data sparsity is a challenging issue. The experiments on both English and Chinese Wikipedia illustrate the effectiveness of our approach. We also provide a preliminary analysis on possible causes of error links in English and Chinese Wikipedia.", "title": "" }, { "docid": "d2a4efcd82d2c55fe243de6d023c5013", "text": "This paper examines a popular stock message board and finds slight daily predictability using supervised learning algorithms when combining daily sentiment with historical price information. Additionally, with the profit potential in trading stocks, it is of no surprise that a number of popular financial websites are attempting to capture investor sentiment by providing an aggregate of this negative and positive online emotion. We question if the existence of dishonest posters are capitalizing on the popularity of the boards by writing sentiment in line with their trading goals as a means of influencing others, and therefore undermining the purpose of the boards. 
We exclude these posters to determine if predictability increases, but find no discernible difference.", "title": "" }, { "docid": "1fd8b2621cdac10dcbaf1dd4b46f4aaf", "text": "BACKGROUND\nOf the few exercise intervention studies focusing on pediatric populations, none have confined the intervention to the scheduled physical education curriculum.\n\n\nOBJECTIVE\nTo examine the effect of an 8-month school-based jumping program on the change in areal bone mineral density (aBMD), in grams per square centimeter, of healthy third- and fourth-grade children.\n\n\nSTUDY DESIGN\nTen elementary schools were randomized to exercise (n = 63) and control groups (n = 81). Exercise groups did 10 tuck jumps 3 times weekly and incorporated jumping, hopping, and skipping into twice weekly physical education classes. Control groups did regular physical education classes. At baseline and after 8 months of intervention, we measured aBMD and lean and fat mass by dual-energy x-ray absorptiometry (Hologic QDR-4500). Calcium intake, physical activity, and maturity were estimated by questionnaire.\n\n\nRESULTS\nThe exercise group showed significantly greater change in femoral trochanteric aBMD (4.4% vs 3.2%; P <.05). There were no group differences at other sites. Results were similar after controlling for covariates (baseline aBMD change in height, change in lean, calcium, physical activity, sex, and ethnicity) in hierarchical regression.\n\n\nCONCLUSIONS\nAn easily implemented school-based jumping intervention augments aBMD at the trochanteric region in the prepubertal and early pubertal skeleton.", "title": "" }, { "docid": "9228218e663951e54f31d697997c80f9", "text": "In this paper, we describe a simple set of \"recipes\" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.", "title": "" }, { "docid": "9836c71624933bb2edde6d30ab1b6273", "text": "Many people believe that sexual orientation (homosexuality vs. heterosexuality) is determined by education and social constraints. There are, however, a large number of studies indicating that prenatal factors have an important influence on this critical feature of human sexuality. Sexual orientation is a sexually differentiated trait (over 90% of men are attracted to women and vice versa). In animals and men, many sexually differentiated characteristics are organized during early life by sex steroids, and one can wonder whether the same mechanism also affects human sexual orientation. Two types of evidence support this notion. First, multiple sexually differentiated behavioral, physiological, or even morphological traits are significantly different in homosexual and heterosexual populations. 
Because some of these traits are known to be organized by prenatal steroids, including testosterone, these differences suggest that homosexual subjects were, on average, exposed to atypical endocrine conditions during development. Second, clinical conditions associated with significant endocrine changes during embryonic life often result in an increased incidence of homosexuality. It seems therefore that the prenatal endocrine environment has a significant influence on human sexual orientation but a large fraction of the variance in this behavioral characteristic remains unexplained to date. Genetic differences affecting behavior either in a direct manner or by changing embryonic hormone secretion or action may also be involved. How these biological prenatal factors interact with postnatal social factors to determine life-long sexual orientation remains to be determined.", "title": "" }, { "docid": "ccff1c7fa149a033b49c3a6330d4e0f3", "text": "Stroke is the leading cause of permanent adult disability in the U.S., frequently resulting in chronic motor impairments. Rehabilitation of the upper limb, particularly the hand, is especially important as arm and hand deficits post-stroke limit the performance of activities of daily living and, subsequently, functional independence. Hand rehabilitation is challenging due to the complexity of motor control of the hand. New instrumentation is needed to facilitate examination of the hand. Thus, a novel actuated exoskeleton for the index finger, the FingerBot, was developed to permit the study of finger kinetics and kinematics under a variety of conditions. Two such novel environments, one applying a spring-like extension torque proportional to angular displacement at each finger joint and another applying a constant extension torque at each joint, were compared in 10 stroke survivors with the FingerBot. Subjects attempted to reach targets located throughout the finger workspace. The constant extension torque assistance resulted in a greater workspace area (p < 0.02) and a larger active range of motion for the metacarpophalangeal joint (p < 0.01) than the spring-like assistance. Additionally, accuracy in terms of reaching the target was greater with the constant extension assistance as compared to no assistance. The FingerBot can be a valuable tool in assessing various hand rehabilitation paradigms following stroke.", "title": "" }, { "docid": "29e030bb4d8547d7615b8e3d17ec843d", "text": "This Paper examines the enforcement of occupational safety and health (OSH) regulations; it validates the state of enforcement of OSH regulations by extracting the salient issues that influence enforcement of OSH regulations in Nigeria. It’s the duty of the Federal Ministry of Labour and Productivity (Inspectorate Division) to enforce the Factories Act of 1990, while the Labour, Safety, Health and Welfare Bill of 2012 empowers the National Council for Occupational Safety and Health of Nigeria to administer the proceeding regulations on its behalf. Sadly enough, the impact of the enforcement authority is ineffective, as the key stakeholders pay less attention to OSH regulations; thus, rendering the OSH scheme dysfunctional and unenforceable, at the same time impeding OSH development. For optimum OSH in Nigeria, maximum enforcement and compliance with the regulations must be in place. This paper, which is based on conceptual analysis, reviews literature gathered through desk literature search. 
It identified issues affecting OSH enforcement such as political influence, bribery and corruption, insecurity, lack of governmental commitment, and inadequate legislation, inter alia. While recommending ways to improve the enforcement of OSH regulations, it states that a self-regulatory style of enforcing OSH regulations should be adopted by organisations. It also recommends that more OSH inspectors be recruited and that local government authorities be empowered to facilitate the enforcement of OSH regulations. Moreover, the study encourages organisations to champion OSH enforcement, as it is beneficial to them; it concludes that the burden of OSH improvement in Nigeria is on the government, educational authorities, organisations and trade unions.", "title": "" }, { "docid": "1e8195deeecb793c65b02924f2da3ef2", "text": "This paper provides an introductory survey of a class of optimization problems known as bilevel programming. We motivate this class through a simple application, and then proceed with the general formulation of bilevel programs. We consider various cases (linear, linear-quadratic, nonlinear), describe their main properties and give an overview of solution approaches.", "title": "" }, { "docid": "1dd1d5304cad393ade793b3435858ce4", "text": "With today’s ubiquity and popularity of social network applications, the ability to analyze and understand large networks in an efficient manner becomes critically important. However, as networks become larger and more complex, reasoning about social dynamics via simple statistics is not a feasible option. To overcome these limitations, we can rely on visual metaphors. Visualization nowadays is no longer a passive process that produces images from a set of numbers. Recent years have witnessed a convergence of social network analytics and visualization, coupled with interaction, that is changing the way analysts understand and characterize social networks. In this chapter, we discuss the main goal of visualization and how different metaphors are aimed towards elucidating different aspects of social networks, such as structure and semantics. We also describe a number of methods where analytics and visualization are interwoven towards providing a better comprehension of social structure and dynamics.", "title": "" }, { "docid": "89a00db08d8a439ab1528943c38904b2", "text": "In biomedical applications, conventional hard robots have been widely used for a long time. However, when they come in contact with the human body, especially for rehabilitation purposes, the hard and stiff nature of the robots has significant drawbacks, as they interfere with movement. Recently, soft robots have been drawing attention due to their high customizability and compliance, especially soft actuators. In this paper, we present a soft pneumatic bending actuator and characterize the performance of the actuator, such as the radius of curvature and force output during actuation. The characterization was done with a simple measurement system that we developed. This work serves as a guideline for designing soft bending actuators with application-specific requirements, for example, a soft exoskeleton for rehabilitation. Keywords— Soft Robots, Actuators, Wearable, Hand Exoskeleton, Rehabilitation.", "title": "" }, { "docid": "7265c5e3f64b0a19592e7b475649433c", "text": "A power transformer outage has a dramatic financial consequence not only for electric power system utilities but also for interconnected customers.
The service reliability of this important asset largely depends upon the condition of the oil-paper insulation. Therefore, by keeping the qualities of oil-paper insulation system in pristine condition, the maintenance planners can reduce the decline rate of internal faults. Accurate diagnostic methods for analyzing the condition of transformers are therefore essential. Currently, there are various electrical and physicochemical diagnostic techniques available for insulation condition monitoring of power transformers. This paper is aimed at the description, analysis and interpretation of modern physicochemical diagnostics techniques for assessing insulation condition in aged transformers. Since fields and laboratory experiences have shown that transformer oil contains about 70% of diagnostic information, the physicochemical analyses of oil samples can therefore be extremely useful in monitoring the condition of power transformers.", "title": "" }, { "docid": "bb547f90a98aa25d0824dc63b9de952d", "text": "When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.", "title": "" }, { "docid": "16a18f742d67e4dfb660b4ce3b660811", "text": "Container-based virtualization has become the de-facto standard for deploying applications in data centers. However, deployed containers frequently include a wide-range of tools (e.g., debuggers) that are not required for applications in the common use-case, but they are included for rare occasions such as in-production debugging. As a consequence, containers are significantly larger than necessary for the common case, thus increasing the build and deployment time. CNTR1 provides the performance benefits of lightweight containers and the functionality of large containers by splitting the traditional container image into two parts: the “fat” image — containing the tools, and the “slim” image — containing the main application. At run-time, CNTR allows the user to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. To achieve this, CNTR transparently combines the two container images using a new nested namespace, without any modification to the application, the container manager, or the operating system. We have implemented CNTR in Rust, using FUSE, and incorporated a range of optimizations. CNTR supports the full Linux filesystem API, and it is compatible with all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). Through extensive evaluation, we show that CNTR incurs reasonable performance overhead while reducing, on average, by 66.6% the image size of the Top-50 images available on Docker Hub.", "title": "" }, { "docid": "6037693a098f8f2713b2316c75447a50", "text": "Presently, monoclonal antibodies (mAbs) therapeutics have big global sales and are starting to receive competition from biosimilars. We previously reported that the nano-surface and molecular-orientation limited (nSMOL) proteolysis which is optimal method for bioanalysis of antibody drugs in plasma. The nSMOL is a Fab-selective limited proteolysis, which utilize the difference of protease nanoparticle diameter (200 nm) and antibody resin pore diameter (100 nm). 
In this report, we have demonstrated that the full validation for chimeric antibody Rituximab bioanalysis in human plasma using nSMOL proteolysis. The immunoglobulin fraction was collected using Protein A resin from plasma, which was then followed by the nSMOL proteolysis using the FG nanoparticle-immobilized trypsin under a nondenaturing condition at 50°C for 6 h. After removal of resin and nanoparticles, Rituximab signature peptides (GLEWIGAIYPGNGDTSYNQK, ASGYTFTSYNMHWVK, and FSGSGSGTSYSLTISR) including complementarity-determining region (CDR) and internal standard P14R were simultaneously quantified by multiple reaction monitoring (MRM). This quantification of Rituximab using nSMOL proteolysis showed lower limit of quantification (LLOQ) of 0.586 µg/mL and linearity of 0.586 to 300 µg/mL. The intra- and inter-assay precision of LLOQ, low quality control (LQC), middle quality control (MQC), and high quality control (HQC) was 5.45-12.9% and 11.8, 5.77-8.84% and 9.22, 2.58-6.39 and 6.48%, and 2.69-7.29 and 4.77%, respectively. These results indicate that nSMOL can be applied to clinical pharmacokinetics study of Rituximab, based on the precise analysis.", "title": "" }, { "docid": "2438479795a9673c36138212b61c6d88", "text": "Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.", "title": "" }, { "docid": "fdb0c8d2a4c4bbe68b7cffe58adbd074", "text": "Endowing a chatbot with personality is challenging but significant to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified personality or profile. We present a method that uses generic conversation data from social media (without speaker identities) to generate profile-coherent responses. The central idea is to detect whether a profile should be used when responding to a user post (by a profile detector), and if necessary, select a key-value pair from the profile to generate a response forward and backward (by a bidirectional decoder) so that a personalitycoherent response can be generated. Furthermore, in order to train the bidirectional decoder with generic dialogue data, a position detector is designed to predict a word position from which decoding should start given a profile value. 
Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.", "title": "" }, { "docid": "0a3cac4df8679fcc9b53a32b3dcaa695", "text": "This paper describes the design of a simple, low-cost microcontroller based heart rate measuring device with LCD output. Heart rate of the subject is measured from the finger using optical sensors and the rate is then averaged and displayed on a text based LCD.", "title": "" }, { "docid": "1deeae749259ff732ad3206dc4a7e621", "text": "In traditional active learning, there is only one labeler that always returns the ground truth of queried labels. However, in many applications, multiple labelers are available to offer diverse qualities of labeling with different costs. In this paper, we perform active selection on both instances and labelers, aiming to improve the classification model most with the lowest cost. While the cost of a labeler is proportional to its overall labeling quality, we also observe that different labelers usually have diverse expertise, and thus it is likely that labelers with a low overall quality can provide accurate labels on some specific instances. Based on this fact, we propose a novel active selection criterion to evaluate the cost-effectiveness of instance-labeler pairs, which ensures that the selected instance is helpful for improving the classification model, and meanwhile the selected labeler can provide an accurate label for the instance with a relative low cost. Experiments on both UCI and real crowdsourcing data sets demonstrate the superiority of our proposed approach on selecting cost-effective queries.", "title": "" }, { "docid": "101bcd956dcdb0fff3ecf78aa841314a", "text": "HCI research has increasingly examined how sensing technologies can help people capture and visualize data about their health-related behaviors. Yet, few systems help people reflect more fundamentally on the factors that influence behaviors such as physical activity (PA). To address this research gap, we take a novel approach, examining how such reflections can be stimulated through a medium that generations of families have used for reflection and teaching: storytelling. Through observations and interviews, we studied how 13 families interacted with a low-fidelity prototype, and their attitudes towards this tool. Our prototype used storytelling and interactive prompts to scaffold reflection on factors that impact children's PA. We contribute to HCI research by characterizing how families interacted with a story-driven reflection tool, and how such a tool can encourage critical processes for behavior change. Informed by the Transtheoretical Model, we present design implications for reflective informatics systems.", "title": "" } ]
scidocsrr
39d3a7ae2678036aae7f582eb9f0db1a
Entropy-based Selection of Graph Cuboids
[ { "docid": "bf14f996f9013351aca1e9935157c0e3", "text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.", "title": "" } ]
[ { "docid": "e3ae049bd1cecbde679acdefc4ad0758", "text": "Beneficial plant–microbe interactions in the rhizosphere are primary determinants of plant health and soil fertility. Arbuscular mycorrhizas are the most important microbial symbioses for the majority of plants and, under conditions of P-limitation, influence plant community development, nutrient uptake, water relations and above-ground productivity. They also act as bioprotectants against pathogens and toxic stresses. This review discusses the mechanism by which these benefits are conferred through abiotic and biotic interactions in the rhizosphere. Attention is paid to the conservation of biodiversity in arbuscular mycorrhizal fungi (AMF). Examples are provided in which the ecology of AMF has been taken into account and has had an impact in landscape regeneration, horticulture, alleviation of desertification and in the bioremediation of contaminated soils. It is vital that soil scientists and agriculturalists pay due attention to the management of AMF in any schemes to increase, restore or maintain soil fertility.", "title": "" }, { "docid": "07447829f6294660359219c2310968b6", "text": "Caudal duplication (dipygus) is an uncommon pathologic of conjoined twinning. The conjoined malformation is classified according to the nature and site of the union. We report the presence of this malformation in a female crossbreed puppy. The puppy was delivered by caesarean section following a prolonged period of dystocia. External findings showed a single head (monocephalus) and a normal cranium with no fissure in the medial line detected. The thorax displayed a caudal duplication arising from the lumbosacral region (rachipagus). The puppy had three upper limbs, a right and left, and a third limb in the dorsal region where the bifurcation began. The subsequent caudal duplication appeared symmetrical. Necropsy revealed internal abnormalities consisting of a complete duplication of the urogenital system and a duplication of the large intestines arising from a bifurcation of the caudal ileum . Considering the morphophysiological description the malformation described would be classified as the first case in the dog of a monocephalusrachipagustribrachius tetrapus.", "title": "" }, { "docid": "3608939d057889c2731b12194ef28ea6", "text": "Permanent magnets with rare earth materials are widely used in interior permanent magnet synchronous motors (IPMSMs) in Hybrid Electric Vehicles (HEVs). The recent price rise of rare earth materials has become a serious concern. A Switched Reluctance Motor (SRM) is one of the candidates for HEV rare-earth-free-motors. An SRM has been developed with dimensions, maximum torque, operating area, and maximum efficiency that all compete with the IPMSM. The efficiency map of the SRM is different from that of the IPMSM; thus, direct comparison has been rather difficult. In this paper, a comparison of energy consumption between the SRM and the IPMSM using four standard driving schedules is carried out. In HWFET and NEDC driving schedules, the SRM is found to have better efficiency because its efficiency is high at the high-rotational-speed region.", "title": "" }, { "docid": "1e6167b15cc904131582beaaf9eb6051", "text": "Using fully homomorphic encryption scheme, we construct fully homomorphic encryption scheme FHE4GT that can homomorphically compute an encryption of the greater-than bit that indicates x > x' or not, given two ciphertexts c and c' of x and x', respectively, without knowing the secret key. 
Then, we construct homomorphic classifier homClassify that can homomorphically classify a given encrypted data without decrypting it, using machine learned parameters.", "title": "" }, { "docid": "d4ea09e7c942174c0301441a5c53b4ef", "text": "As the cloud computing is a new style of computing over internet. It has many advantages along with some crucial issues to be resolved in order to improve reliability of cloud environment. These issues are related with the load management, fault tolerance and different security issues in cloud environment. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among various nodes of a distributed system to improve both resource utilization and job response time while also avoiding a situation where some of the nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that all the processor in the system or every node in the network does approximately the equal amount of work at any instant of time. Many methods to resolve this problem has been came into existence like Particle Swarm Optimization, hash method, genetic algorithms and several scheduling based algorithms are there. In this paper we are proposing a method based on Ant Colony optimization to resolve the problem of load balancing in cloud environment.", "title": "" }, { "docid": "54ab143dc18413c58c20612dbae142eb", "text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.", "title": "" }, { "docid": "405e5d6050adec3cc6e60a4e64b1e0a5", "text": "The ARCS Motivation Theory was proposed to guide instructional designers and teachers who develop their own instruction to integrate motivational design strategies into the instruction. There is a lack of literature supporting the idea that instruction for blended courses if designed based on the ARCS Motivation Theory provides different experiences for learners in terms of motivation than instruction developed following the standard instructional design procedure for blended courses. 
This study was conducted to compare the students’ motivational evaluation of blended course modules developed based on the ARCS Motivation Theory with students’ motivational evaluation of blended course modules developed following the standard instructional design procedure. Fifty randomly assigned junior undergraduate students studying at the Department of Turkish Language and Literature participated in the study. The Motivation Measure for the Blended Course Instruction (MMBCI) instrument was used to collect data for the study after the Confirmatory Factor Analysis (CFA). Results of the study indicated that designing instruction in blended courses based on the ARCS Motivation Theory provides more motivational benefits for students and consequently contributes to student learning.", "title": "" }, { "docid": "c1b1fe329296d4996f741b9e2ae558ac", "text": "In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks.", "title": "" }, { "docid": "0ef3d7b26feba199df7d466d14740a57", "text": "A parsing algorithm visualizer is a tool that visualizes the construction of a parser for a given context-free grammar and then illustrates the use of that parser to parse a given string. Parsing algorithm visualizers are used to teach the course on compiler construction which is invariably included in all undergraduate computer science curricula. This paper presents a new parsing algorithm visualizer that can visualize six parsing algorithms, viz. predictive parsing, simple LR parsing, canonical LR parsing, look-ahead LR parsing, Earley parsing and CYK parsing. The tool logically explains the process of parsing, showing the calculations involved in each step. The output of the tool has been structured to maximize the learning outcomes and contains important constructs like FIRST and FOLLOW sets, item sets, the parsing table, the parse tree and the leftmost or rightmost derivation depending on the algorithm being visualized. The tool has been used to teach the course on compiler construction at both undergraduate and graduate levels. Overall positive feedback was received from the students, with 89% of them saying that the tool helped them in understanding the parsing algorithms. The tool is capable of visualizing multiple parsing algorithms, and 88% of students used it to compare the algorithms.", "title": "" }, { "docid": "c67ffe3dfa6f0fe0449f13f1feb20300", "text": "The associations between giving a history of physical, emotional, and sexual abuse in children and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined to establish groups giving histories of unequivocal victimization.
A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes was more apparent than any differences, though there was a trend for sexual abuse to be particularly associated to sexual problems, emotional abuse to low self-esteem, and physical abuse to marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated to the same range of negative adult outcomes as for abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems was accounted for by this matrix of childhood disadvantage from which abuse so often emerged.", "title": "" }, { "docid": "abc48ae19e2ea1e1bb296ff0ccd492a2", "text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also", "title": "" }, { "docid": "a58cbbff744568ae7abd2873d04d48e9", "text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. 
Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.", "title": "" }, { "docid": "a6b29716a299415fd88289032acf7d3d", "text": "As the Internet grows quickly, pornography, which in the past was often printed in small quantities, has become one of the most widely distributed kinds of information on the Internet. However, pornography may be harmful to children, and may affect the efficiency of workers. In this paper, we design an easy scheme for detecting pornography. We exploit primitive information from pornography and use this knowledge to determine whether a given photo belongs to pornography or not. First, we extract the skin region from photos and find the correlation between the skin region and the non-skin region. Then, we use these correlations as the input to a support vector machine (SVM), an excellent tool for classification with learning abilities. After training the SVM model, we achieved about 75% accuracy, a 35% false alarm rate, and only a 14% mis-detection rate. Moreover, we also provide a simple tool based on our scheme.", "title": "" }, { "docid": "5bf172cfc7d7de0c82707889cf722ab2", "text": "The concept of a decentralized ledger usually implies that each node of a blockchain network stores the entire blockchain. However, in the case of popular blockchains, which each weigh several hundred GB, the large amount of data to be stored can incite new or low-capacity nodes to run lightweight clients. Such nodes do not participate in the global storage effort and can result in a centralization of the blockchain by very few nodes, which is contrary to the basic concepts of a blockchain. To avoid this problem, we propose new low storage nodes that store a reduced amount of data generated from the blockchain by using erasure codes. The properties of this technique ensure that any block of the chain can be easily rebuilt from a small number of such nodes. This system should encourage low storage nodes to contribute to the storage of the blockchain and to maintain decentralization despite the globally increasing size of the blockchain. This system paves the way to new types of blockchains which would only be managed by low capacity nodes.", "title": "" }, { "docid": "d35736158d3f38503f0f2090c4e47811", "text": "This study examines the role of the decision environment in how well business intelligence (BI) capabilities are leveraged to achieve BI success. We examine the decision environment in terms of the types of decisions made and the information processing needs of the organization. Our findings suggest that technological capabilities such as data quality, user access and the integration of BI with other systems are necessary for BI success, regardless of the decision environment. However, the decision environment does influence the relationship between BI success and capabilities, such as the extent to which BI supports flexibility and risk in decision making.
", "title": "" }, { "docid": "22c3eb9aa0127e687f6ebb6994fc8d1d", "text": "In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections.
Acting on the hypothesis that the “inputs” into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in detect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors which need to be identified.", "title": "" }, { "docid": "57167d5bf02e9c76057daa83d3f803c5", "text": "When alcohol is consumed, the alcoholic beverages first pass through the various segments of the gastrointestinal (GI) tract. Accordingly, alcohol may interfere with the structure as well as the function of GI-tract segments. For example, alcohol can impair the function of the muscles separating the esophagus from the stomach, thereby favoring the occurrence of heartburn. Alcohol-induced damage to the mucosal lining of the esophagus also increases the risk of esophageal cancer. In the stomach, alcohol interferes with gastric acid secretion and with the activity of the muscles surrounding the stomach. Similarly, alcohol may impair the muscle movement in the small and large intestines, contributing to the diarrhea frequently observed in alcoholics. Moreover, alcohol inhibits the absorption of nutrients in the small intestine and increases the transport of toxins across the intestinal walls, effects that may contribute to the development of alcohol-related damage to the liver and other organs.", "title": "" }, { "docid": "229a541fa4b8e9157c8cc057ae028676", "text": "The proposed system introduces a new genetic algorithm for prediction of financial performance with input data sets from a financial domain. The goal is to produce a GA-based methodology for prediction of stock market performance along with an associative classifier from numerical data. This work restricts the numerical data to stock trading data. Stock trading data contains the quotes of stock market. From this information, many technical indicators can be extracted, and by investigating the relations between these indicators trading signals can discovered. Genetic algorithm is being used to generate all the optimized relations among the technical indicator and its value. Along with genetic algorithm association rule mining algorithm is used for generation of association rules among the various Technical Indicators. Associative rules are generated whose left side contains a set of trading signals, expressed by relations among the technical indicators, and whose right side indicates whether there is a positive ,negative or no change. The rules are being further given to the classification process which will be able to classify the new data making use of the previously generated rules. The proposed idea in the paper is to offer an efficient genetic algorithm in combination with the association rule mining algorithm which predicts stock market performance. Keywords— Genetic Algorithm, Associative Rule Mining, Technical Indicators, Associative rules, Stock Market, Numerical Data, Rules INTRODUCTION Over the last decades, there has been much research interests directed at understanding and predicting future. 
Among them, forecasting price movements in stock markets is a major challenge confronting investors, speculators and businesses. How to make the right decision in stock trading attracts much attention from many financial and technical fields. Many technologies, such as evolutionary optimization methods, have been studied to help people find better ways to earn more profit from the stock market. Data mining methods have shown their power to improve the accuracy of stock movement prediction, with which more profit can be obtained with less risk. Applications of data mining techniques for stock investment include clustering, decision trees etc. Moreover, research on the stock market discovers trading signals and timings from financial data. Because of the numerical attributes used, data mining techniques, such as decision trees, have weaker capabilities to handle this kind of numerical data, and there are infinitely many possible ways to enumerate relations among data. Stock prices depend on various factors, the important ones being the market sentiment, performance of the industry, earning results and projected earnings, takeover or merger, introduction of a new product or introduction of an existing product into new markets, share buy-back, announcements of dividends/bonuses, addition or removal from the index and such other factors leading to a positive or negative impact on the share price and the associated volumes. Apart from the basic technical and fundamental analysis techniques used in stock market analysis and prediction, soft computing methods based on Association Rule Mining, fuzzy logic, neural networks, genetic algorithms etc. are increasingly finding their place in understanding and predicting the financial markets. The genetic algorithm has a great capability to discover good solutions rapidly for difficult high dimensional problems. It also has a good capability to deal with numerical data and relations between numerical data. Genetic algorithms have emerged as a powerful general purpose search and optimization technique and have found applications in widespread areas. Associative classification, one of the most important tasks in data mining and knowledge discovery, builds a classification system based on associative classification rules. Association rules are learned and extracted from the available training dataset and the most suitable rules are selected to build an associative classification model. Association rule discovery has been used with great success in", "title": "" } ]
scidocsrr
169501ecb613c34287e0ff45354f5ad5
SALSA-TEXT : self attentive latent space based adversarial text generation
[ { "docid": "c81e823de071ae451420326e9fbb2e3d", "text": "Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.", "title": "" }, { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. 
Such a discrepancy between training and inference can accumulate along the sequence and will become prominent as the sequence length increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and has mostly been applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then applies a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good it is now and what the future score will be once the entire sequence is generated. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feed back the evaluation to guide the learning of the generative model.
To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three real-world tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently train deep belief nets (DBN). (Bengio et al. 2013) proposed the denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, the variational autoencoder (VAE) combined deep learning with statistical inference to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) the training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology for generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to discrete sequence data generation problems, e.g. natural language generation (Huszár 2015). This is because the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of effort has been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data, whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed the scheduled sampling strategy (SS). Later, (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory.
Consequently, the GANs have great potential but are currently not practically feasible for discrete probabilistic models. As pointed out by (Bachman and Precup 2015), sequence data generation can be formulated as a sequential decision making process, which can potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence; for instance, in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" }, { "docid": "548e1962ac4a2ea36bf90db116c4ff49", "text": "LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.", "title": "" }, { "docid": "f4c8fa37408d5341c2b54f92f0dfff4f", "text": "Generative adversarial networks are an effective approach for learning rich latent representations of continuous data, but have proven difficult to apply directly to discrete structured data, such as text sequences or discretized images. Ideally we could encode discrete structures in a continuous code space to avoid this problem, but it is difficult to learn an appropriate general-purpose encoder. In this work, we consider a simple approach for handling these two challenges jointly, employing a discrete structure autoencoder with a code space regularized by generative adversarial training. The model learns a smooth regularized code space while still being able to model the underlying data, and can be used as a discrete GAN with the ability to generate coherent discrete outputs from continuous samples. We demonstrate empirically how key properties of the data are captured in the model’s latent space, and evaluate the model itself on the tasks of discrete image generation, text generation, and semi-supervised learning.", "title": "" } ]
[ { "docid": "ab2f1f27b11a5a41ff6b2b79bc044c2f", "text": "ABSTACT: Trajectory tracking has been an extremely active research area in robotics in the past decade.In this paper, a kinematic model of two wheel mobile robot for reference trajectory tracking is analyzed and simulated. For controlling the wheeled mobile robot PID controllers are used. For finding the optimal parameters of PID controllers, in this work particle swarm optimization (PSO) is used. The proposed methodology is shown to be a successful solutionfor solving the problem.", "title": "" }, { "docid": "88048217d8d052dbe1d2b74145be76b5", "text": "Human learners, including infants, are highly sensitive to structure in their environment. Statistical learning refers to the process of extracting this structure. A major question in language acquisition in the past few decades has been the extent to which infants use statistical learning mechanisms to acquire their native language. There have been many demonstrations showing infants' ability to extract structures in linguistic input, such as the transitional probability between adjacent elements. This paper reviews current research on how statistical learning contributes to language acquisition. Current research is extending the initial findings of infants' sensitivity to basic statistical information in many different directions, including investigating how infants represent regularities, learn about different levels of language, and integrate information across situations. These current directions emphasize studying statistical language learning in context: within language, within the infant learner, and within the environment as a whole. WIREs Cogn Sci 2010 1 906-914 This article is categorized under: Linguistics > Language Acquisition Psychology > Language.", "title": "" }, { "docid": "0e672586c4be2e07c3e794ed1bb3443d", "text": "In this thesis, the multi-category dataset has been incorporated with the robust feature descriptor using the scale invariant feature transform (SIFT), SURF and FREAK along with the multi-category enabled support vector machine (mSVM). The multi-category support vector machine (mSVM) has been designed with the iterative phases to make it able to work with the multi-category dataset. The mSVM represents the training samples of main class as the primary class in every iterative phase and all other training samples are categorized as the secondary class for the support vector machine classification. The proposed model is made capable of working with the variations in the indoor scene image dataset, which are noticed in the form of the color, texture, light, image orientation, occlusion and color illuminations. Several experiments have been conducted over the proposed model for the performance evaluation of the indoor scene recognition system in the proposed model. The results of the proposed model have been obtained in the form of the various performance parameters of statistical errors, precision, recall, F1-measure and overall accuracy. The proposed model has clearly outperformed the existing models in the terms of the overall accuracy. 
The proposed model improvement has been recorded higher than ten percent for all of the evaluated parameters against the existing models based upon SURF, FREAK, etc.", "title": "" }, { "docid": "8bb0077bf14426f02a6339dd1be5b7f2", "text": "Astrocytes are thought to play a variety of key roles in the adult brain, such as their participation in synaptic transmission, in wound healing upon brain injury, and adult neurogenesis. However, to elucidate these functions in vivo has been difficult because of the lack of astrocyte-specific gene targeting. Here we show that the inducible form of Cre (CreERT2) expressed in the locus of the astrocyte-specific glutamate transporter (GLAST) allows precisely timed gene deletion in adult astrocytes as well as radial glial cells at earlier developmental stages. Moreover, postnatal and adult neurogenesis can be targeted at different stages with high efficiency as it originates from astroglial cells. Taken together, this mouse line will allow dissecting the molecular pathways regulating the diverse functions of astrocytes as precursors, support cells, repair cells, and cells involved in neuronal information processing.", "title": "" }, { "docid": "5a1f4efc96538c1355a2742f323b7a0e", "text": "A great challenge in the proteomics and structural genomics era is to predict protein structure and function, including identification of those proteins that are partially or wholly unstructured. Disordered regions in proteins often contain short linear peptide motifs (e.g., SH3 ligands and targeting signals) that are important for protein function. We present here DisEMBL, a computational tool for prediction of disordered/unstructured regions within a protein sequence. As no clear definition of disorder exists, we have developed parameters based on several alternative definitions and introduced a new one based on the concept of \"hot loops,\" i.e., coils with high temperature factors. Avoiding potentially disordered segments in protein expression constructs can increase expression, foldability, and stability of the expressed protein. DisEMBL is thus useful for target selection and the design of constructs as needed for many biochemical studies, particularly structural biology and structural genomics projects. The tool is freely available via a web interface (http://dis.embl.de) and can be downloaded for use in large-scale studies.", "title": "" }, { "docid": "81bd2987a3c5c82379ef69a6f065b17f", "text": "Although accumulating evidence highlights a crucial role of the insular cortex in feelings, empathy and processing uncertainty in the context of decision making, neuroscientific models of affective learning and decision making have mostly focused on structures such as the amygdala and the striatum. Here, we propose a unifying model in which insula cortex supports different levels of representation of current and predictive states allowing for error-based learning of both feeling states and uncertainty. This information is then integrated in a general subjective feeling state which is modulated by individual preferences such as risk aversion and contextual appraisal. Such mechanisms could facilitate affective learning and regulation of body homeostasis, and could also guide decision making in complex and uncertain environments.", "title": "" }, { "docid": "b6af904a2746862d76a4588d050f093c", "text": "This paper presents a fast algorithm for smooth digital elevation model interpolation and approximation from scattered elevation data. 
The global surface is reconstructed by subdividing it into overlapping local subdomains using a perfectly balanced binary tree. In each tree leaf, a smooth local surface is reconstructed using radial basis functions. Finally, a hierarchical blending is done to create the final C1-continuous surface using a family of functions called Partition of Unity. We present two terrain data sets and show that our method is robust since the number of data points in the Partition of Unity blending areas is explicitly specified.", "title": "" }, { "docid": "0d1e889a69ea17e43c5f65bac38bba79", "text": "In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.", "title": "" }, { "docid": "394e99bd9c0b3b5a0765f49f2fc38c53", "text": "We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method, called HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and perform significantly better than many competitive algorithms for each of these four tasks.", "title": "" }, { "docid": "c451d86c6986fab1a1c4cd81e87e6952", "text": "Large-scale is a trend in person re-identification (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes an attempt at integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of a different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than that of ones with different identities.
In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.", "title": "" }, { "docid": "a7ca3ffcae09ad267281eb494532dc54", "text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.", "title": "" }, { "docid": "9f0cb11b8ec05933a10a9c82803f7ce4", "text": "From 2005 to 2012, injuries to children under five increased by 10%. Using the expansion of ATT’s 3G network, I find that smartphone adoption has a causal impact on child injuries. This effect is strongest amongst children ages 0-5, but not children ages 6-10, and in activities where parental supervision matters. I put this forward as indirect evidence that this increase is due to parents being distracted while supervising children, and not due to increased participation in accident-prone activities.", "title": "" }, { "docid": "5953dafaebde90a0f6af717883452d08", "text": "Compact high-voltage Marx generators have found wide ranging applications for driving resistive and capacitive loads. Parasitic or leakage capacitance in compact low-energy Marx systems has proved useful in driving resistive loads, but it can be detrimental when driving capacitive loads where it limits the efficiency of energy transfer to the load capacitance. In this paper, we show how manipulating network designs consisting of these parasitic elements along with internal and external components can optimize the performance of such systems.", "title": "" }, { "docid": "b492c624d1593515d55b3d9b6ac127a7", "text": "We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.", "title": "" }, { "docid": "39271e70afb7ea1b1876b57dfab1d745", "text": "This study examined the patterns or mechanism for conflict resolution in traditional African societies with particular reference to Yoruba and Igbo societies in Nigeria and Pondo tribe in South Africa. The paper notes that conflict resolution in traditional African societies provides opportunity to interact with the parties concerned, it promotes consensus-building, social bridge reconstructions and enactment of order in the society. 
The paper submits further that the western world placed more emphasis on the judicial system presided over by council of elders, kings’ courts, peoples (open place)", "title": "" }, { "docid": "275a5302219385f22706b483ecc77a74", "text": "This paper describes a bilingual text-to-speech (TTS) system, Microsoft Mulan, which switches between Mandarin and English smoothly and which maintains the sentence level intonation even for mixed-lingual texts. Mulan is constructed on the basis of the Soft Prediction Only prosodic strategy and the Prosodic-Constraint Orient unit-selection strategy. The unitselection module of Mulan is shared across languages. It is insensitive to language identity, even though the syllable is used as the smallest unit in Mandarin, and the phoneme in English. Mulan has a unique module, the language-dispatching module, which dispatches texts to the language-specific front-ends and merges the outputs of the two front-ends together. The mixed texts are “uttered” out with the same voice. According to our informal listening test, the speech synthesized with Mulan sounds quite natural. Sample waves can be heard at: http://research.microsoft.com/~echang/projects/tts/mulan.htm.", "title": "" }, { "docid": "d039154425d05fa996810b4a00364671", "text": "Community structure is an important area of research. It has received a considerable attention from the scientific community. Despite its importance, one of the key problems in locating information about community detection is the diverse spread of related articles across various disciplines. To the best of our knowledge, there is no current comprehensive review of recent literature which uses a scientometric analysis using complex networks analysis covering all relevant articles from the Web of Science (WoS). Here we present a visual survey of key literature using CiteSpace. The idea is to identify emerging trends besides using network techniques to examine the evolution of the domain. Towards that end, we identify the most influential, central, as well as active nodes using scientometric analyses. We examine authors, key articles, cited references, core subject categories, key journals, institutions, as well as countries. The exploration of the scientometric literature of the domain reveals that Yong Wang is a pivot node with the highest centrality. Additionally, we have observed that Mark Newman is the most highly cited author in the network. We have also identified that the journal, \"Reviews of Modern Physics\" has the strongest citation burst. In terms of cited documents, an article by Andrea Lancichinetti has the highest centrality score. We have also discovered that the origin of the key publications in this domain is from the United States. Whereas Scotland has the strongest and longest citation burst. Additionally, we have found that the categories of \"Computer Science\" and \"Engineering\" lead other categories based on frequency and centrality respectively.", "title": "" }, { "docid": "a68cec6fd069499099c8bca264eb0982", "text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. 
Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.", "title": "" }, { "docid": "04d06629a3683536fb94228f6295a7d3", "text": "User profiling is an important step for solving the problem of personalized news recommendation. Traditional user profiling techniques often construct profiles of users based on static historical data accessed by users. However, due to the frequent updating of news repository, it is possible that a user’s finegrained reading preference would evolve over time while his/her long-term interest remains stable. Therefore, it is imperative to reason on such preference evaluation for user profiling in news recommenders. Besides, in content-based news recommenders, a user’s preference tends to be stable due to the mechanism of selecting similar content-wise news articles with respect to the user’s profile. To activate users’ reading motivations, a successful recommender needs to introduce ‘‘somewhat novel’’ articles to", "title": "" } ]
scidocsrr
34fa03cd360074fc2aad44f2c25f4576
Annotation of Entities and Relations in Spanish Radiology Reports
[ { "docid": "9aae377bf3ebb202b13fab2cbd85f1ce", "text": "The paper describes a rule-based information extraction (IE) system developed for Polish medical texts. We present two applications designed to select data from medical documentation in Polish: mammography reports and hospital records of diabetic patients. First, we have designed a special ontology that subsequently had its concepts translated into two separate models, represented as typed feature structure (TFS) hierarchies, complying with the format required by the IE platform we adopted. Then, we used dedicated IE grammars to process documents and fill in templates provided by the models. In particular, in the grammars, we addressed such linguistic issues as: ambiguous keywords, negation, coordination or anaphoric expressions. Resolving some of these problems has been deferred to a post-processing phase where the extracted information is further grouped and structured into more complex templates. To this end, we defined special heuristic algorithms on the basis of sample data. The evaluation of the implemented procedures shows their usability for clinical data extraction tasks. For most of the evaluated templates, precision and recall well above 80% were obtained.", "title": "" }, { "docid": "804920bbd9ee11cc35e93a53b58e7e79", "text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.", "title": "" } ]
[ { "docid": "c76d53333ae2443720178819bf23a3ea", "text": "Deng and Xu [2003] proposed a system of multiple recursive generators of prime modulus <i>p</i> and order <i>k</i>, where all nonzero coefficients of the recurrence are equal. This type of generator is efficient because only a single multiplication is required. It is common to choose <i>p</i> = 2<sup>31</sup>−1 and some multipliers to further improve the speed of the generator. In this case, some fast implementations are available without using explicit division or multiplication. For such a <i>p</i>, Deng and Xu [2003] provided specific parameters, yielding the maximum period for recurrence of order <i>k</i>, up to 120. One problem of extending it to a larger <i>k</i> is the difficulty of finding a complete factorization of <i>p</i><sup><i>k</i></sup>−1. In this article, we apply an efficient technique to find <i>k</i> such that it is easy to factor <i>p</i><sup><i>k</i></sup>−1, with <i>p</i> = 2<sup>31</sup>−1. The largest one found is <i>k</i> = 1597. To find multiple recursive generators of large order <i>k</i>, we introduce an efficient search algorithm with an early exit strategy in case of a failed search. For <i>k</i> = 1597, we constructed several efficient and portable generators with the period length approximately 10<sup>14903.1</sup>.", "title": "" }, { "docid": "9948ebbd2253021e3af53534619c5094", "text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "5387c752db7b4335a125df91372099b3", "text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. 
Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.", "title": "" }, { "docid": "0724e800d88d1d7cd1576729f975b09a", "text": "Neural networks are investigated for predicting the magnitude of the largest seismic event in the following month based on the analysis of eight mathematically computed parameters known as seismicity indicators. The indicators are selected based on the Gutenberg-Richter and characteristic earthquake magnitude distribution and also on the conclusions drawn by recent earthquake prediction studies. Since there is no known established mathematical or even empirical relationship between these indicators and the location and magnitude of a succeeding earthquake in a particular time window, the problem is modeled using three different neural networks: a feed-forward Levenberg-Marquardt backpropagation (LMBP) neural network, a recurrent neural network, and a radial basis function (RBF) neural network. Prediction accuracies of the models are evaluated using four different statistical measures: the probability of detection, the false alarm ratio, the frequency bias, and the true skill score or R score. The models are trained and tested using data for two seismically different regions: Southern California and the San Francisco bay region. Overall the recurrent neural network model yields the best prediction accuracies compared with LMBP and RBF networks. While at the present earthquake prediction cannot be made with a high degree of certainty this research provides a scientific approach for evaluating the short-term seismic hazard potential of a region.", "title": "" }, { "docid": "f8b56265c69727f55cc5debfc6958e41", "text": "Ground control of unmanned aerial vehicles (UAV) is a key to the advancement of this technology for commercial purposes. The need for reliable ground control arises in scenarios where human intervention is necessary, e.g. handover situations when autonomous systems fail. Manual flights are also needed for collecting diverse datasets to train deep neural network-based control systems. This axiom is even more prominent for the case of unmanned flying robots where there is no simple solution to capture optimal navigation footage. In such scenarios, improving the ground control and developing better autonomous systems are two sides of the same coin. To improve the ground control experience, and thus the quality of the footage, we propose to upgrade onboard teleoperation systems to a fully immersive setup that provides operators with a stereoscopic first person view (FPV) through a virtual reality (VR) head-mounted display. We tested users (n = 7) by asking them to fly our drone on the field. Test flights showed that operators flying our system can take off, fly, and land successfully while wearing VR headsets. In addition, we ran two experiments with prerecorded videos of the flights and walks to a wider set of participants (n = 69 and n = 20) to compare the proposed technology to the experience provided by current drone FPV solutions that only include monoscopic vision. Our immersive stereoscopic setup enables higher accuracy depth perception, which has clear implications for achieving better teleoperation and unmanned navigation. Our studies show comprehensive data on the impact of motion and simulator sickness in case of stereoscopic setup. 
We present the device specifications as well as the measures that improve teleoperation experience and reduce induced simulator sickness. Our approach provides higher perception fidelity during flights, which leads to a more precise better teleoperation and ultimately translates into better flight data for training deep UAV control policies.", "title": "" }, { "docid": "60465268d2ede9a7d8b374ac05df0d46", "text": "Nobody likes performance reviews. Subordinates are terrified they'll hear nothing but criticism. Bosses think their direct reports will respond to even the mildest criticism with anger or tears. The result? Everyone keeps quiet. That's unfortunate, because most people need help figuring out how to improve their performance and advance their careers. This fear of feedback doesn't come into play just during annual reviews. At least half the executives with whom the authors have worked never ask for feedback. Many expect the worst: heated arguments, even threats of dismissal. So rather than seek feedback, people try to guess what their bosses are thinking. Fears and assumptions about feedback often manifest themselves in psychologically maladaptive behaviors such as procrastination, denial, brooding, jealousy, and self-sabotage. But there's hope, say the authors. Those who learn adaptive techniques can free themselves from destructive responses. They'll be able to deal with feedback better if they acknowledge negative emotions, reframe fear and criticism constructively, develop realistic goals, create support systems, and reward themselves for achievements along the way. Once you've begun to alter your maladaptive behaviors, you can begin seeking regular feedback from your boss. The authors take you through four steps for doing just that: self-assessment, external assessment, absorbing the feedback, and taking action toward change. Organizations profit when employees ask for feedback and deal well with criticism. Once people begin to know how they are doing relative to management's priorities, their work becomes better aligned with organizational goals. What's more, they begin to transform a feedback-averse environment into a more honest and open one, in turn improving performance throughout the organization.", "title": "" }, { "docid": "a5776d4da32a93c69b18c696c717e634", "text": "Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. 
Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.", "title": "" }, { "docid": "92dbb257f6d087ce61f5c560c34bf46f", "text": "This study investigates eCommerce adoption in family run SMEs (small and medium sized enterprises). Specifically, the objectives of the study are twofold: (a) to examine environmental and organisational determinants of eCommerce adoption in the family business context; (b) to explore the moderating effect of business strategic orientation on the relationships between adoption determinants and adoption decision. A quantitative questionnaire survey was executed. The sampling frame was outlined based on the OneSource database and 88 companies were involved. Results of logistic regression analyses proffer support that ‘external pressure’ and ‘perceived benefits’ are predictors of eCommerce adoption. Moreover, the findings indicate that the strategic orientation of family businesses will function as a moderator in the adoption process. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d", "text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. 
The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.", "title": "" }, { "docid": "ede1f31a32e59d29ee08c64c1a6ed5f7", "text": "There are different approaches to the problem of assigning each word of a text with a parts-of-speech tag, which is known as Part-Of-Speech (POS) tagging. In this paper we compare the performance of a few POS tagging techniques for Bangla language, e.g. statistical approach (n-gram, HMM) and transformation based approach (Brill’s tagger). A supervised POS tagging approach requires a large amount of annotated training corpus to tag properly. At this initial stage of POS-tagging for Bangla, we have very limited resource of annotated corpus. We tried to see which technique maximizes the performance with this limited resource. We also checked the performance for English and tried to conclude how these techniques might perform if we can manage a substantial amount of annotated corpus.", "title": "" }, { "docid": "cae269a1eee20846aa2ea83cbf1d0ecc", "text": "Metformin has utility in cancer prevention and treatment, though the mechanisms for these effects remain elusive. Through genetic screening in C. elegans, we uncover two metformin response elements: the nuclear pore complex (NPC) and acyl-CoA dehydrogenase family member-10 (ACAD10). We demonstrate that biguanides inhibit growth by inhibiting mitochondrial respiratory capacity, which restrains transit of the RagA-RagC GTPase heterodimer through the NPC. Nuclear exclusion renders RagC incapable of gaining the GDP-bound state necessary to stimulate mTORC1. Biguanide-induced inactivation of mTORC1 subsequently inhibits growth through transcriptional induction of ACAD10. This ancient metformin response pathway is conserved from worms to humans. Both restricted nuclear pore transit and upregulation of ACAD10 are required for biguanides to reduce viability in melanoma and pancreatic cancer cells, and to extend C. elegans lifespan. This pathway provides a unified mechanism by which metformin kills cancer cells and extends lifespan, and illuminates potential cancer targets. PAPERCLIP.", "title": "" }, { "docid": "fb915584f23482986e672b1a38993ca1", "text": "We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms—including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.", "title": "" }, { "docid": "9e292d43355dbdbcf6360c88e49ba38b", "text": "This paper proposes stacked dual-patch CP antenna for GPS and SDMB services. The characteristic of CP at dual-frequency bands is achieved with a circular patch truncated corners with ears at diagonal direction. According to the dimensions of the truncated corners as well as spacing between centers of the two via-holes, the axial ratio of the CP antenna can be controlled. The good return loss results were obtained both at GPS and SDMB bands. 
The measured gains of the antenna system are 2.3 dBi and 2.4 dBi in GPS and SDMB bands, respectively. The measured axial ratio is slightly shifted frequencies due to diameter variation of via-holes and the spacing between lower patch and upper patch. The proposed low profile, low-cost fabrication, dual circularly polarization, and separated excitation ports make the proposed stacked antenna an applicable solution as a multi-functional antenna for GPS and SDMB operation on vehicle.", "title": "" }, { "docid": "60c976cb53d5128039e752e5f797f110", "text": "This essay presents and discusses the developing role of virtual and augmented reality technologies in education. Addressing the challenges in adapting such technologies to focus on improving students’ learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students’ knowledge acquisition. Stakeholders in the educational role of technology include students, faculty members, institutions, and manufacturers. While the benefits of such technologies are still under investigation, the technology landscape offers opportunities to enhance face-to-face and online teaching, including contributions in the understanding of abstract concepts and training in real environments and situations. Barriers to technology use involve limited adoption of augmented and virtual reality technologies, and, more directly, necessary training of teachers in using such technologies within meaningful educational contexts. The author proposes a six-step methodology to aid adoption of these technologies as basic elements within the regular education: training teachers; developing conceptual prototypes; teamwork involving the teacher, a technical programmer, and an educational architect; and producing the experience, which then provides results in the subsequent two phases wherein teachers are trained to apply augmentedand virtual-reality solutions within their teaching methodology using an available subject-specific experience and then finally implementing the use of the experience in a regular subject with students. The essay concludes with discussion of the business opportunities facing virtual reality in face-to-face education as well as augmented and virtual reality in online education.", "title": "" }, { "docid": "86ad395a553495de5f297a2b5fde3f0e", "text": "⇒ NOT written, but spoken language. [Intuitions come from written.] ⇒ NOT meaning as thing, but use of linguistic forms for communicative functions o Direct att. in shared conceptual space like gestures (but w/conventions) ⇒ NOT grammatical rules, but patterns of use => schemas o Constructions themselves as complex symbols \"She sneezed him the ball\" o NOT 'a grammar' but a structured inventory of constructions: continuum of regularity => idiomaticity grammaticality = normativity • Many complexities = \"unification\" of constructions w/ incompatibilities o NOT innate UG, but \"teeming modularity\" (1) symbols, pred-arg structure,", "title": "" }, { "docid": "53a55976808757ceb4b5533af578aad9", "text": "Vehicular Ad-Hoc Networks (VANETs) will play an important role in Smart Cities and will support the development of not only safety applications, but also car smart video surveillance services. Recent improvements in multimedia over VANETs allow drivers, passengers, and rescue teams to capture, share, and access on-road multimedia services. 
Vehicles can cooperate with each other to transmit live flows of traffic accidents or disasters and provide drivers, passengers, and rescue teams rich visual information about a monitored area. Since humans will watch the videos, their distribution must be done by considering the provided Quality of Experience (QoE) even in multi-hop, multi-path, and dynamic environments. This article introduces an application framework to handle this kind of services and a routing protocol, the DBD (Distributed Beaconless Dissemination), that enhances the dissemination of live video flows on multimedia highway VANETs. DBD uses a backbone-based approach to create and maintain persistent and high quality routes during the video delivery in opportunistic Vehicle to Vehicle (V2V) scenarios. It also improves the performance of the IEEE 802.11p MAC layer, by solving the Spurious Forwarding (SF) problem, while increasing the packet delivery ratio and reducing the forwarding delay. Performance evaluation results show the benefits of DBD compared to existing works in forwarding videos over VANETs, where main objective and subjective QoE results are measured. Safety and video surveillance car applications are key Information and Communication Technologies (ICT) services for smart city scenarios and have been attracting an important attention from governments, car manufacturers, academia, and society [1]. Nowadays, the distribution of real-time multimedia content over Vehicular Ad-Hoc Networks (VANETs) is becoming a reality and allowing drivers/passengers to have new experiences with on-road videos in a smart city [2,3]. According to Cisco, video traffic will represent over 90% of the global IP data in a few years, where thousands of users will produce, share, and consume multimedia services ubiquitously, including in their vehicles. Multimedia VANETs are well-suited for capturing and sharing environmental monitoring, surveillance, traffic accidents, and disaster-based video smart city applications. Live streaming video flows provide users and authorities (e.g., first responder teams and paramedics) with more precise information than simple text messages and allow them to determine a suitable action, while reducing human reaction times [4]. Vehicles can cooperate with each other to disseminate short videos of dangerous situations to visually inform drivers and rescue teams about them both in the city and on a highway. …", "title": "" }, { "docid": "b4e6c50275eef350da454f088ba7e02c", "text": "Children with language-based learning impairments (LLIs) have major deficits in their recognition of some rapidly successive phonetic elements and nonspeech sound stimuli. In the current study, LLI children were engaged in adaptive training exercises mounted as computer \"games\" designed to drive improvements in their \"temporal processing\" skills. With 8 to 16 hours of training during a 20-day period, LLI children improved markedly in their abilities to recognize brief and fast sequences of nonspeech and speech stimuli.", "title": "" }, { "docid": "2c4fed71ee9d658516b017a924ad6589", "text": "As the concept of Friction stir welding is relatively new, there are many areas, which need thorough investigation to optimize and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translation speeds, tool tilt angle, tool geometry etc. are to be controlled. Aluminum alloys of 5xxx series and their welded joints show good resistance to corrosion in sea water. 
Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.", "title": "" }, { "docid": "ad7a5bccf168ac3b13e13ccf12a94f7d", "text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.", "title": "" }, { "docid": "dc6119045a87d7cea34db49554549926", "text": "Multi-tenancy is a relatively new software architecture principle in the realm of the Software as a Service (SaaS) business model. It allows to make full use of the economy of scale, as multiple customers – “tenants” – share the same application and database instance. All the while, the tenants enjoy a highly configurable application, making it appear that the application is deployed on a dedicated server. The major benefits of multi-tenancy are increased utilization of hardware resources and improved ease of maintenance, resulting in lower overall application costs, making the technology attractive for service providers targeting small and medium enterprises (SME). In our paper, we identify some of the core challenges of implementing multi-tenancy. Furthermore, we present a conceptual reengineering approach to support the migration of single-tenant applications into multi-tenant applications.", "title": "" } ]
scidocsrr
d5cbb9266b3655f79e4675c9e5cf0da0
Prism adaptation and aftereffect: specifying the properties of a procedural memory system.
[ { "docid": "ae54996b12f39802f31173b43cda91f9", "text": "The topic of multiple forms of memory is considered from a biological point of view. Fact-and-event (declarative, explicit) memory is contrasted with a collection of non conscious (non-declarative, implicit) memory abilities including skills and habits, priming, and simple conditioning. Recent evidence is reviewed indicating that declarative and non declarative forms of memory have different operating characteristics and depend on separate brain systems. A brain-systems framework for understanding memory phenomena is developed in light of lesion studies involving rats, monkeys, and humans, as well as recent studies with normal humans using the divided visual field technique, event-related potentials, and positron emission tomography (PET).", "title": "" } ]
[ { "docid": "c09e5f5592caab9a076d92b4f40df760", "text": "Producing a comprehensive overview of the chemical content of biologically-derived material is a major challenge. Apart from ensuring adequate metabolome coverage and issues of instrument dynamic range, mass resolution and sensitivity, there are major technical difficulties associated with data pre-processing and signal identification when attempting large scale, high-throughput experimentation. To address these factors direct infusion or flow infusion electrospray mass spectrometry has been finding utility as a high throughput metabolite fingerprinting tool. With little sample pre-treatment, no chromatography and instrument cycle times of less than 5 min it is feasible to analyse more than 1,000 samples per week. Data pre-processing is limited to aligning extracted mass spectra and mass-intensity matrices are generally ready in a working day for a month’s worth of data mining and hypothesis generation. ESI-MS fingerprinting has remained rather qualitative by nature and as such ion suppression does not generally compromise data information content as originally suggested when the methodology was first introduced. This review will describe how the quality of data has improved through use of nano-flow infusion and mass-windowing approaches, particularly when using high resolution instruments. The increasingly wider availability of robust high accurate mass instruments actually promotes ESI-MS from a merely fingerprinting tool to the ranks of metabolite profiling and combined with MS/MS capabilities of hybrid instruments improved structural information is available concurrently. We summarise current applications in a wide range of fields where ESI-MS fingerprinting has proved to be an excellent tool for “first pass” metabolome analysis of complex biological samples. The final part of the review describes a typical workflow with reference to recently published data to emphasise key aspects of overall experimental design.", "title": "" }, { "docid": "9b8d4b855bab5e2fdcadd1fe1632f197", "text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.", "title": "" }, { "docid": "b6f4bd15f7407b56477eb2cfc4c72801", "text": "In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. 
The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction.", "title": "" }, { "docid": "b93ee4889d7f7dcfa04ef0132bc36b60", "text": "In the past decade, social and information networks have become prevalent, and research on the network data has attracted much attention. Besides the link structure, network data are often equipped with the content information (i.e, node attributes) that is usually noisy and characterized by high dimensionality. As the curse of dimensionality could hamper the performance of many machine learning tasks on networks (e.g., community detection and link prediction), feature selection can be a useful technique for alleviating such issue. In this paper, we investigate the problem of unsupervised feature selection on networks. Most existing feature selection methods fail to incorporate the linkage information, and the state-of-the-art approaches usually rely on pseudo labels generated from clustering. Such cluster labels may be far from accurate and can mislead the feature selection process. To address these issues, we propose a generative point of view for unsupervised features selection on networks that can seamlessly exploit the linkage and content information in a more effective manner. We assume that the link structures and node content are generated from a succinct set of high-quality features, and we find these features through maximizing the likelihood of the generation process. Experimental results on three real-world datasets show that our approach can select more discriminative features than state-of-the-art methods.", "title": "" }, { "docid": "ef92f3f230a7eedee7555b5fc35f5558", "text": "Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation ( r=0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r=0.45) between CAAB-predicted and clinician-provided mobility scores. 
These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.", "title": "" }, { "docid": "a66b5b6dea68e5460b227af4caa14ef3", "text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.", "title": "" }, { "docid": "f0365424e98ebcc0cb06ce51f65cbe7c", "text": "The most important milestone in the field of magnetic sensors was that AMR sensors started to replace Hall sensors in many application, were larger sensitivity is an advantage. GMR and SDT sensor finally found limited applications. We also review the development in miniaturization of fluxgate sensors and briefly mention SQUIDs, resonant sensors, GMIs and magnetomechanical sensors.", "title": "" }, { "docid": "73c4bded5834e75adb9820a8e0fed13d", "text": "We present a comprehensive evaluation of a large number of semi-supervised anomaly detection techniques for time series data. Some of these are existing techniques and some are adaptations that have never been tried before. For example, we adapt the window based discord detection technique to solve this problem. We also investigate several techniques that detect anomalies in discrete sequences, by discretizing the time series data. We evaluate these techniques on a large variety of data sets obtained from a broad spectrum of application domains. The data sets have different characteristics in terms of the nature of normal time series and the nature of anomalous time series. We evaluate the techniques on different metrics, such as accuracy in detecting the anomalous time series, sensitivity to parameters, and computational complexity, and provide useful insights regarding the effectiveness of different techniques based on the experimental evaluation.", "title": "" }, { "docid": "bbfe7693d45e3343b30fad7f6c9279d8", "text": "Vernier permanent magnet (VPM) machines can be utilized for direct drive applications by virtue of their high torque density and high efficiency. The purpose of this paper is to develop a general design guideline for split-slot low-speed VPM machines, generalize the operation principle, and illustrate the relationship among the numbers of the stator slots, coil poles, permanent magnet (PM) pole pairs, thereby laying a solid foundation for the design of various kinds of VPM machines. Depending on the PM locations, three newly designed VPM machines are reported in this paper and they are referred to as 1) rotor-PM Vernier machine, 2) stator-tooth-PM Vernier machine, and 3) stator-yoke-PM Vernier machine. The back-electromotive force (back-EMF) waveforms, static torque, and air-gap field distribution are predicted using time-stepping finite element method (TS-FEM). 
The performances of the proposed VPM machines are compared and reported.", "title": "" }, { "docid": "16f1b038f51e614da06ba84ebd175e14", "text": "This paper explores how to extract argumentation-relevant information automatically from a corpus of legal decision documents, and how to build new arguments using that information. For decision texts, we use the Vaccine/Injury Project (V/IP) Corpus, which contains default-logic annotations of argument structure. We supplement this with presuppositional annotations about entities, events, and relations that play important roles in argumentation, and about the level of confidence that arguments would be successful. We then propose how to integrate these semantic-pragmatic annotations with syntactic and domain-general semantic annotations, such as those generated in the DeepQA architecture, and outline how to apply machine learning and scoring techniques similar to those used in the IBM Watson system for playing the Jeopardy! question-answer game. We replace this game-playing goal, however, with the goal of learning to construct legal arguments.", "title": "" }, { "docid": "8eac34d73a2bcb4fa98793499d193067", "text": "We review here the recent success in quantum annealing, i.e., optimization of the cost or energy functions of complex systems utilizing quantum fluctuations. The concept is introduced in successive steps through the studies of mapping of such computationally hard problems to the classical spin glass problems. The quantum spin glass problems arise with the introduction of quantum fluctuations, and the annealing behavior of the systems as these fluctuations are reduced slowly to zero. This provides a general framework for realizing analog quantum computation.", "title": "" }, { "docid": "fa8c3873cf03af8d4950a0e53f877b08", "text": "The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multi-dimensional diffusions is particularly challenging. In principle this involves data-augmentation of the observation data to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either through the discretisation of idealised approaches for the continuous-time diffusion setup; or through the use of standard finite-dimensional methodologies discretisation of the diffusion model. The connections between these approaches have not been well-studied. This paper will provide a unified framework bringing together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for reducible diffusions, and our framework is correspondingly more complex in that case. Therefore we treat the reducible and irreducible cases differently within the paper. Supplementary materials for the article are avilable on line. 1 Overview of likelihood-based inference for diffusions Diffusion processes have gained much popularity as statistical models for observed and latent processes. Among others, their appeal lies in their flexibility to deal with nonlinearity, time-inhomogeneity and heteroscedasticity by specifying two interpretable functionals, their amenability to efficient computations due to their Markov property, and the rich existing mathematical theory about their properties. 
As a result, they are used as models throughout Science; some book references related with this approach to modeling include Section 5.3 of [1] for physical systems, Section 8.3.3 (in conjunction with Section 6.3) of [12] for systems biology and mass action stochastic kinetics, and Chapter 10 of [27] for interest rates. A mathematically precise specification of a d-dimensional diffusion process V is as the solution of a stochastic differential equation (SDE) of the type: dV_s = b(s, V_s; θ1) ds + σ(s, V_s; θ2) dB_s, s ∈ [0, T]; (1) where B is an m-dimensional standard Brownian motion, b(·, ·; ·) : R+ × R^d × Θ1 → R^d is the drift and σ(·, ·; ·) : R+ × R^d × Θ2 → R^(d×m) is the diffusion coefficient.", "title": "" }, { "docid": "462256d2d428f8c77269e4593518d675", "text": "This paper is devoted to the modeling of real textured images by functional minimization and partial differential equations. Following the ideas of Yves Meyer in a total variation minimization framework of L. Rudin, S. Osher, and E. Fatemi, we decompose a given (possible textured) image f into a sum of two functions u+v, where u ∈ BV is a function of bounded variation (a cartoon or sketchy approximation of f), while v is a function representing the texture or noise. To model v we use the space of oscillating functions introduced by Yves Meyer, which is in some sense the dual of the BV space. The new algorithm is very simple, making use of differential equations and is easily solved in practice. Finally, we implement the method by finite differences, and we present various numerical results on real textured images, showing the obtained decomposition u+v, but we also show how the method can be used for texture discrimination and texture segmentation.", "title": "" }, { "docid": "21130eded44790720e79a750ecdf3847", "text": "Enabled by Web 2.0 technologies social media provide an unparalleled platform for consumers to share their product experiences and opinions---through word-of-mouth (WOM) or consumer reviews. It has become increasingly important to understand how WOM content and metrics thereof are related to consumer purchases and product sales. By integrating network analysis with text sentiment mining techniques, we propose product comparison networks as a novel construct, computed from consumer product reviews. To test the validity of these product ranking measures, we conduct an empirical study based on a digital camera dataset from Amazon.com. The results demonstrate significant linkage between network-based measures and product sales, which is not fully captured by existing review measures such as numerical ratings. The findings provide important insights into the business impact of social media and user-generated content, an emerging problem in business intelligence research. 
From a managerial perspective, our results suggest that WOM in social media also constitutes a competitive landscape for firms to understand and manipulate.", "title": "" }, { "docid": "33cab0ec47af5e40d64e34f8ffc7dd6f", "text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10^-8 to 10^6 m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.", "title": "" }, { "docid": "75639f4119e862382732b1ee597a9bd3", "text": "People enjoy food photography because they appreciate food. Behind each meal there is a story described in a complex recipe and, unfortunately, by simply looking at a food image we do not have access to its preparation process. Therefore, in this paper we introduce an inverse cooking system that recreates cooking recipes given food images. Our system predicts ingredients as sets by means of a novel architecture, modeling their dependencies without imposing any order, and then generates cooking instructions by attending to both image and its inferred ingredients simultaneously. We extensively evaluate the whole system on the large-scale Recipe1M dataset and show that (1) we improve performance w.r.t. previous baselines for ingredient prediction; (2) we are able to obtain high quality recipes by leveraging both image and ingredients; (3) our system is able to produce more compelling recipes than retrieval-based approaches according to human judgment.", "title": "" }, { "docid": "8cfc2b5947a130d72486748b1d086e7e", "text": "The Legal Knowledge Interchange Format (LKIF), being developed in the European ESTRELLA project, defines a knowledge representation language for arguments, rules, ontologies, and cases in XML. In this article, the syntax and argumentation-theoretic semantics of the LKIF rule language is presented and illustrated with an example based on German family law. This example is then applied to show how LKIF rules can be used with the Carneades argumentation system to construct, evaluate and visualize arguments about a legal case.", "title": "" }, { "docid": "181eafc11f3af016ca0926672bdb5a9d", "text": "The conventional wisdom is that backprop nets with excess hidden units generalize poorly. We show that nets with excess capacity generalize well when trained with backprop and early stopping. Experiments suggest two reasons for this: 1) Overfitting can vary significantly in different regions of the model. Excess capacity allows better fit to regions of high non-linearity, and backprop often avoids overfitting the regions of low non-linearity. 2) Regardless of size, nets learn task subcomponents in similar sequence. 
Big nets pass through stages similar to those learned by smaller nets. Early stopping can stop training the large net when it generalizes comparably to a smaller net. We also show that conjugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linearity.", "title": "" }, { "docid": "6524efda795834105bae7d65caf15c53", "text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. 
Individuals who returned to work recognized the barriers but had found ways of managing them.", "title": "" }, { "docid": "a08d783229b59342cdb015e051450f94", "text": "We consider the problem of estimating the remaining useful life (RUL) of a system or a machine from sensor data. Many approaches for RUL estimation based on sensor data make assumptions about how machines degrade. Additionally, sensor data from machines is noisy and often suffers from missing values in many practical settings. We propose Embed-RUL: a novel approach for RUL estimation from sensor data that does not rely on any degradation-trend assumptions, is robust to noise, and handles missing values. Embed-RUL utilizes a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to generate embeddings for multivariate time series subsequences. The embeddings for normal and degraded machines tend to be different, and are therefore found to be useful for RUL estimation. We show that the embeddings capture the overall pattern in the time series while filtering out the noise, so that the embeddings of two machines with similar operational behavior are close to each other, even when their sensor readings have significant and varying levels of noise content. We perform experiments on publicly available turbofan engine dataset and a proprietary real-world dataset, and demonstrate that Embed-RUL outperforms the previously reported [24] state-of-the-art on several metrics.", "title": "" } ]
scidocsrr
a3041d0fadc6fba5a081fd6f04a804bf
Jump to better conclusions: SCAN both left and right
[ { "docid": "346349308d49ac2d3bb1cfa5cc1b429c", "text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "title": "" } ]
[ { "docid": "d63a81df4117f2b615f6e7208a2bdb6b", "text": "Recently, Location-based Services (LBS) became proactive by supporting smart notifications in case the user enters or leaves a specific geographical area, well-known as Geofencing. However, different geofences cannot be temporally related to each other. Therefore, we introduce a novel method to formalize sophisticated Geofencing scenarios as state and transition-based geofence models. Such a model considers temporal relations between geofences as well as duration constraints for the time being within a geofence or in transition between geofences. These are two highly important aspects in order to cover sophisticated scenarios in which a notification should be triggered only in case the user crosses multiple geofences in a defined temporal order or leaves a geofence after a certain amount of time. As a proof of concept, we introduce a prototype of a suitable user interface for designing complex geofence models in conjunction with the corresponding proactive LBS.", "title": "" }, { "docid": "3508a963a4f99d02d9c41dab6801d8fd", "text": "The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students’ comprehension and learning. This comprehensive meta-analysis of empirical studies was conducted to examine evidence of the effects of classroom discussion on measures of teacher and student talk and on individual student comprehension and critical-thinking and reasoning outcomes. Results revealed that several discussion approaches produced strong increases in the amount of student talk and concomitant reductions in teacher talk, as well as substantial improvements in text comprehension. Few approaches to discussion were effective at increasing students’ literal or inferential comprehension and critical thinking and reasoning. Effects were moderated by study design, the nature of the outcome measure, and student academic ability. While the range of ages of participants in the reviewed studies was large, a majority of studies were conducted with students in 4th through 6th grades. Implications for research and practice are discussed.", "title": "" }, { "docid": "6c1a21055e21198c2102f2601b835104", "text": "Stroke is a leading cause of adult motor disability. Despite recent progress, recovery of motor function after stroke is usually incomplete. This double blind, Sham-controlled, crossover study was designed to test the hypothesis that non-invasive stimulation of the motor cortex could improve motor function in the paretic hand of patients with chronic stroke. Hand function was measured using the Jebsen-Taylor Hand Function Test (JTT), a widely used, well validated test for functional motor assessment that reflects activities of daily living. JTT measured in the paretic hand improved significantly with non-invasive transcranial direct current stimulation (tDCS), but not with Sham, an effect that outlasted the stimulation period, was present in every single patient tested and that correlated with an increment in motor cortical excitability within the affected hemisphere, expressed as increased recruitment curves (RC) and reduced short-interval intracortical inhibition. 
These results document a beneficial effect of non-invasive cortical stimulation on a set of hand functions that mimic activities of daily living in the paretic hand of patients with chronic stroke, and suggest that this interventional strategy in combination with customary rehabilitative treatments may play an adjuvant role in neurorehabilitation.", "title": "" }, { "docid": "fab33f2e32f4113c87e956e31674be58", "text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, uniqueand synergistic contributions. We focus on the relationship be tween “redundant information” and the more familiar information theoretic notions of “common information.” Our main contri bution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decompositi on of the total mutual information. Interestingly, this entai ls that any reasonable measure of redundant information cannot be deri ved by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition", "title": "" }, { "docid": "842202ed67b71c91630fcb63c4445e38", "text": "Yaumatei Dermatology Clinic, 12/F Yaumatei Specialist Clinic (New Extension), 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 46-year-old Chinese man presented with one year history of itchy verrucous lesions over penis and scrotum. Skin biopsy confirmed epidermolytic acanthoma. Epidermolytic acanthoma is a rare benign tumour. Before making such a diagnosis, exclusion of other diseases, especially genital warts and bowenoid papulosis is necessary. Treatment of multiple epidermolytic acanthoma remains unsatisfactory.", "title": "" }, { "docid": "052a83669b39822eda51f2e7222074b4", "text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.", "title": "" }, { "docid": "ea9f43aaab4383369680c85a040cedcf", "text": "Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. 
CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.", "title": "" }, { "docid": "dfb16d97d293776e255397f1dc49bbbf", "text": "Self-service automatic teller machines (ATMs) have dramatically altered the ways in which customers interact with banks. ATMs provide the convenience of completing some banking transactions remotely and at any time. AT&T Global Information Solutions (GIS) is the world's leading provider of ATMs. These machines support such familiar services as cash withdrawals and balance inquiries. Further technological development has extended the utility and convenience of ATMs produced by GIS by facilitating check cashing and depositing, as well as direct bill payment, using an on-line system. These enhanced services, discussed in this paper, are made possible primarily through sophisticated optical character recognition (OCR) technology. Developed by an AT&T team that included GIS, AT&T Bell Laboratories Quality, Engineering, Software, and Technologies (QUEST), and AT&T Bell Laboratories Research, OCR technology was crucial to the development of these advanced ATMs.", "title": "" }, { "docid": "3bb4666a27f6bc961aa820d3f9301560", "text": "The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.", "title": "" }, { "docid": "30e93cb20194b989b26a8689f06b8343", "text": "We present a robust method for solving the map matching problem exploiting massive GPS trace data. Map matching is the problem of determining the path of a user on a map from a sequence of GPS positions of that user --- what we call a trajectory. Commonly obtained from GPS devices, such trajectory data is often sparse and noisy. As a result, the accuracy of map matching is limited due to ambiguities in the possible routes consistent with trajectory samples. Our approach is based on the observation that many regularity patterns exist among common trajectories of human beings or vehicles as they normally move around. Among all possible connected k-segments on the road network (i.e., consecutive edges along the network whose total length is approximately k units), a typical trajectory collection only utilizes a small fraction. This motivates our data-driven map matching method, which optimizes the projected paths of the input trajectories so that the number of the k-segments being used is minimized. We present a formulation that admits efficient computation via alternating optimization. Furthermore, we have created a benchmark for evaluating the performance of our algorithm and others alike. Experimental results demonstrate that the proposed approach is superior to state-of-art single trajectory map matching techniques. Moreover, we also show that the extracted popular k-segments can be used to process trajectories that are not present in the original trajectory set. 
This leads to a map matching algorithm that is as efficient as existing single trajectory map matching algorithms, but with much improved map matching accuracy.", "title": "" }, { "docid": "76a99c83dfbe966839dd0bcfbd32fad6", "text": "Virtually all domains of cognitive function require the integration of distributed neural activity. Network analysis of human brain connectivity has consistently identified sets of regions that are critically important for enabling efficient neuronal signaling and communication. The central embedding of these candidate 'brain hubs' in anatomical networks supports their diverse functional roles across a broad range of cognitive tasks and widespread dynamic coupling within and across functional networks. The high level of centrality of brain hubs also renders them points of vulnerability that are susceptible to disconnection and dysfunction in brain disorders. Combining data from numerous empirical and computational studies, network approaches strongly suggest that brain hubs play important roles in information integration underpinning numerous aspects of complex cognitive function.", "title": "" }, { "docid": "1e8acf321f7ff3a1a496e4820364e2a8", "text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.", "title": "" }, { "docid": "c896c4c81a3b8d18ad9f8073562f5514", "text": "A fully integrated passive UHF RFID tag with embedded temperature sensor, compatible with the ISO/IEC 18000 type 6C protocol, is developed in a standard 0.18µm CMOS process, which is designed to measure the axle temperature of a running train. The consumption of RF/analog front-end circuits is 1.556µA@1.0V, and power dissipation of digital part is 5µA@1.0V. The CMOS temperature sensor exhibits a conversion time under 2 ms, less than 7 µW power dissipation, resolution of 0.31°C/LSB and error of +2.3/−1.1°C with a 1.8 V power supply for range from −35°C to 105°C. Measured sensitivity of tag is −5dBm at room temperature.", "title": "" }, { "docid": "8c04758d9f1c44e007abf6d2727d4a4f", "text": "The automatic identification and diagnosis of rice diseases are highly desired in the field of agricultural information. Deep learning is a hot research topic in pattern recognition and machine learning at present, it can effectively solve these problems in vegetable pathology. In this study, we propose a novel rice diseases identification method based on deep convolutional neural networks (CNNs) techniques. Using a dataset of 500 natural images of diseased and healthy rice leaves and stems captured from rice experimental field, CNNs are trained to identify 10 common rice diseases. Under the 10-fold cross-validation strategy, the proposed CNNs-based model achieves an accuracy of 95.48%. 
This accuracy is much higher than conventional machine learning model. The simulation results for the identification of rice diseases show the feasibility and effectiveness of the proposed method. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "23c1bd79e91f2e07b883c5cdbd97a780", "text": "BACKGROUND\nPostprandial hypertriglyceridemia and hyperglycemia are considered risk factors for cardiovascular disease. Evidence suggests that postprandial hypertriglyceridemia and hyperglycemia induce endothelial dysfunction and inflammation through oxidative stress. Statins and angiotensin type 1 receptor blockers have been shown to reduce oxidative stress and inflammation, improving endothelial function.\n\n\nMETHODS AND RESULTS\nTwenty type 2 diabetic patients ate 3 different test meals: a high-fat meal, 75 g glucose alone, and a high-fat meal plus glucose. Glycemia, triglyceridemia, endothelial function, nitrotyrosine, C-reactive protein, intercellular adhesion molecule-1, and interleukin-6 were assayed during the tests. Subsequently, diabetics took atorvastatin 40 mg/d, irbesartan 300 mg/d, both, or placebo for 1 week. The 3 tests were performed again between 5 and 7 days after the start of each treatment. High-fat load and glucose alone produced a decrease in endothelial function and increases in nitrotyrosine, C-reactive protein, intercellular adhesion molecule-1, and interleukin-6. These effects were more pronounced when high-fat load and glucose were combined. Short-term atorvastatin and irbesartan treatments significantly counterbalanced these phenomena, and their combination was more effective than either therapy alone.\n\n\nCONCLUSIONS\nThis study confirms an independent and cumulative effect of postprandial hypertriglyceridemia and hyperglycemia on endothelial function and inflammation, suggesting oxidative stress as a common mediator of such an effect. Short-term treatment with atorvastatin and irbesartan may counterbalance this phenomenon; the combination of the 2 compounds is most effective.", "title": "" }, { "docid": "2e9a0bce883548288de0a5d380b1ddf6", "text": "Three-level neutral point clamped (NPC) inverter is a widely used topology of multilevel inverters. However, the neutral point fluctuates for certain switching states. At low modulation index, the fluctuations can be compensated using redundant switching states. But, at higher modulation index and in overmodulation region, the neutral point fluctuation deteriorates the performance of the inverter. This paper proposes a simple space vector pulsewidth modulation scheme for operating a three-level NPC inverter at higher modulation indexes, including overmodulation region, with neutral point balancing. Experimental results are provided", "title": "" }, { "docid": "4fc64e24e9b080ffcc45cae168c2e339", "text": "During real time control of a dynamic system, one needs to design control systems with advanced control strategies to handle inherent nonlinearities and disturbances. This paper deals with the designing of a model reference adaptive control system with the use of MIT rule for real time control of a ball and beam system. This paper uses the gradient theory to develop MIT rule in which one or more parameters of adaptive controller needs to be adjusted so that the plant could track the reference model. A linearized model of ball and beam system is used in this paper to design the controller on MATLAB and the designed controller is then applied for real time control of ball and beam system. 
Simulations carried on SIMULINK and MATLAB show good performance of the designed adaptive controller in real time.", "title": "" }, { "docid": "25e7e22d19d786ff953c8cfa47988aa2", "text": "The world of human-object interactions is rich. While generally we sit on chairs and sofas, if need be we can even sit on TVs or top of shelves. In recent years, there has been progress in modeling actions and human-object interactions. However, most of these approaches require lots of data. It is not clear if the learned representations of actions are generalizable to new categories. In this paper, we explore the problem of zero-shot learning of human-object interactions. Given limited verb-noun interactions in training data, we want to learn a model than can work even on unseen combinations. To deal with this problem, In this paper, we propose a novel method using external knowledge graph and graph convolutional networks which learns how to compose classifiers for verbnoun pairs. We also provide benchmarks on several dataset for zero-shot learning including both image and video. We hope our method, dataset and baselines will facilitate future research in this direction.", "title": "" }, { "docid": "e6633bf0c5f2fd18f739a7f3a1751854", "text": "Image inpainting in wavelet domain refers to the recovery of an image from incomplete and/or inaccurate wavelet coefficients. To reconstruct the image, total variation (TV) models have been widely used in the literature and they produce high-quality reconstructed images. In this paper, we consider an unconstrained TV-regularized, l2-data-fitting model to recover the image. The model is solved by the alternating direction method (ADM). At each iteration, ADM needs to solve three subproblems, all of which have closed-form solutions. The per-iteration computational cost of ADM is dominated by two Fourier transforms and two wavelet transforms, all of which admit fast computation. Convergence of the ADM iterative scheme is readily obtained. We also discuss extensions of this ADM scheme to solving two closely related constrained models. We present numerical results to show the efficiency and stability of ADM for solving wavelet domain image inpainting problems. Numerical comparison results of ADM with some recent algorithms are also reported.", "title": "" }, { "docid": "2910fe6ac9958d9cbf9014c5d3140030", "text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.", "title": "" } ]
scidocsrr
32c8889bf4dae5b6fa371c0b6e172252
The Making of a 3D-Printed, Cable-Driven, Single-Model, Lightweight Humanoid Robotic Hand
[ { "docid": "d2c0bccf1ff6fd4ac9d76defe1632a85", "text": "Children with hand reductions, whether congenital or traumatic, have unique prosthetic needs. They present a challenge because of their continually changing size due to physical growth as well as changing needs due to psychosocial development. Conventional prosthetics are becoming more technologically advanced and increasingly complex. Although these are welcome advances for adults, the concomitant increases in weight, moving parts, and cost are not beneficial for children. Pediatric prosthetic needs may be better met with simpler solutions. Three-dimensional printing can be used to fabricate rugged, light-weight, easily replaceable, and very low cost assistive hands for children.", "title": "" }, { "docid": "3309e09d16e74f87a507181bd82cd7f0", "text": "The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long term goal is to understand how to reduce mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.", "title": "" } ]
[ { "docid": "563a630a752416668664246e8eb937b6", "text": "The Linux Driver Verification system is designed for static analysis of the source code of Linux kernel space device drivers. In this paper, we describe the architecture of the verification system, including the integration of third-party tools for static verification of C programs. We consider characteristics of the Linux drivers source code that are important from the viewpoint of verification algorithms and give examples of comparative analysis of different verification tools, as well as different versions and configurations of a given tool.", "title": "" }, { "docid": "bc17b54461a134809911ebfa2a57e560", "text": "We use data with complete information on both rejected and accepted bank loan applicants to estimate the value of sample bias correction using Heckman’s two-stage model with partial observability. In the credit scoring domain such correction is called reject inference. We validate the model performances with and without the correction of sample bias by various measurements. Results show that it is prohibitively costly not to control for sample selection bias due to the accept/reject decision. However, we also find that the Heckman procedure is unable to appropriately control for the selection bias. † Data contained in this study were produced on site at the Carnegie-Mellon Census Research Data Center. Research results and conclusions are those of the authors and do not necessarily indicate concurrence by the Bureau of the Census or the Carnegie-Mellon Census Research Data Center. Åstebro acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada and the Social Sciences and Humanities Research Council of Canada’s joint program in Management of Technological Change as well as support from the Canadian Imperial Bank of Commerce.", "title": "" }, { "docid": "2801a5a26d532fc33543744ea89743f1", "text": "Microalgae have received much interest as a biofuel feedstock in response to the uprising energy crisis, climate change and depletion of natural sources. Development of microalgal biofuels from microalgae does not satisfy the economic feasibility of overwhelming capital investments and operations. Hence, high-value co-products have been produced through the extraction of a fraction of algae to improve the economics of a microalgae biorefinery. Examples of these high-value products are pigments, proteins, lipids, carbohydrates, vitamins and anti-oxidants, with applications in cosmetics, nutritional and pharmaceuticals industries. To promote the sustainability of this process, an innovative microalgae biorefinery structure is implemented through the production of multiple products in the form of high value products and biofuel. This review presents the current challenges in the extraction of high value products from microalgae and its integration in the biorefinery. The economic potential assessment of microalgae biorefinery was evaluated to highlight the feasibility of the process.", "title": "" }, { "docid": "c6160b8ad36bc4f297bfb1f6b04c79e0", "text": "Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin’s computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. 
Smart contracts enforce the malicious party’s payments, and therefore miners need neither trust the attacker’s intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit).", "title": "" }, { "docid": "ea6392b6a49ed40cb5e3779e0d1f3ea2", "text": "We see the world in scenes, where visual objects occur in rich surroundings, often embedded in a typical context with other related objects. How does the human brain analyse and use these common associations? This article reviews the knowledge that is available, proposes specific mechanisms for the contextual facilitation of object recognition, and highlights important open questions. Although much has already been revealed about the cognitive and cortical mechanisms that subserve recognition of individual objects, surprisingly little is known about the neural underpinnings of contextual analysis and scene perception. Building on previous findings, we now have the means to address the question of how the brain integrates individual elements to construct the visual experience.", "title": "" }, { "docid": "1326be667e3ec3aa6bf0732ef97c230a", "text": "Recognizing human activities in a sequence is a challenging area of research in ubiquitous computing. Most approaches use a fixed size sliding window over consecutive samples to extract features— either handcrafted or learned features—and predict a single label for all samples in the window. Two key problems emanate from this approach: i) the samples in one window may not always share the same label. Consequently, using one label for all samples within a window inevitably lead to loss of information; ii) the testing phase is constrained by the window size selected during training while the best window size is difficult to tune in practice. We propose an efficient algorithm that can predict the label of each sample, which we call dense labeling, in a sequence of human activities of arbitrary length using a fully convolutional network. In particular, our approach overcomes the problems posed by the sliding window step. Additionally, our algorithm learns both the features and classifier automatically. We release a new daily activity dataset based on a wearable sensor with hospitalized patients. We conduct extensive experiments and demonstrate that our proposed approach is able to outperform the state-of-the-arts in terms of classification and label misalignment measures on three challenging datasets: Opportunity, Hand Gesture, and our new dataset.", "title": "" }, { "docid": "aeba4012971d339a9a953a7b86f57eb8", "text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. 
To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "title": "" }, { "docid": "459b07b78f3cbdcbd673881fd000da14", "text": "The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as “goats” due to their relatively high probability for a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for “goats,” since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of “goats” during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.", "title": "" }, { "docid": "7eec93450eb625bee264f37f1520603f", "text": "User Experience is a key differentiator in the era of Digital Disruption. Chatbots are increasingly considered as part of a delighting User Experience. But only a bright Conversation Modelling effort, which is part of holistic Enterprise Modelling, can make a Chatbot effective. In addition, best practices can be applied to achieve or even outperform user expectations. Thanks to Cognitive Systems and associated modelling tools, effective Chatbot dialogs can be developed in an agile manner, while respecting the enterprise", "title": "" }, { "docid": "34523c9ccd5d8c0bec2a84173205be99", "text": "Deep learning has achieved astonishing results onmany taskswith large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. 
DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.", "title": "" }, { "docid": "d68147bf8637543adf3053689de740c3", "text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.", "title": "" }, { "docid": "41261cf72d8ee3bca4b05978b07c1c4f", "text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.", "title": "" }, { "docid": "96aa1f19a00226af7b5bbe0bb080582e", "text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). 
At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.", "title": "" }, { "docid": "a85803f14639bef7f4539bad631d088c", "text": "5.", "title": "" }, { "docid": "71757d1cee002bb235a591cf0d5aafd5", "text": "There is an old Wall Street adage goes, ‘‘It takes volume to make price move”. The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China’s stock market. Pacific-Basin Finace Journal, 12, 541–564], Hodgson et al. [Hodgson, A., Masih, A. M. M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68–85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285–295] have found the correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which take stock index and trading volume as forecasting factors to predict stock index. In empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiplefactor models, Chen’s [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263–275] and Huarng and Yu’s [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445–462], as comparison models. The experimental results indicate that the proposed model outperforms the listing models and the employed factors, stock index and the volume technical indicator, VR(t), are effective in stock index forecasting. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a21f04b6c8af0b38b3b41f79f2661fa6", "text": "While Enterprise Architecture Management is an established and widely discussed field of interest in the context of information systems research, we identify a lack of work regarding quality assessment of enterprise architecture models in general and frameworks or methods on that account in particular. By analyzing related work by dint of a literature review in a design science research setting, we provide twofold contributions. We (i) suggest an Enterprise Architecture Model Quality Framework (EAQF) and (ii) apply it to a real world scenario. Keywords—Enterprise Architecture, model quality, quality framework, EA modeling.", "title": "" }, { "docid": "ea5697d417fe154be77d941c19d8a86e", "text": "The foundations of functional programming languages are examined from both historical and technical perspectives. 
Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.", "title": "" }, { "docid": "f16fd498b692875c3bd95460feaf06ec", "text": "Raman and Fourier Transform Infrared (FT-IR) spectroscopy was used for assessment of structural differences of celluloses of various origins. Investigated celluloses were: bacterial celluloses cultured in presence of pectin and/or xyloglucan, as well as commercial celluloses and cellulose extracted from apple parenchyma. FT-IR spectra were used to estimate of the I(β) content, whereas Raman spectra were used to evaluate the degree of crystallinity of the cellulose. The crystallinity index (X(C)(RAMAN)%) varied from -25% for apple cellulose to 53% for microcrystalline commercial cellulose. Considering bacterial cellulose, addition of xyloglucan has an impact on the percentage content of cellulose I(β). However, addition of only xyloglucan or only pectins to pure bacterial cellulose both resulted in a slight decrease of crystallinity. However, culturing bacterial cellulose in the presence of mixtures of xyloglucan and pectins results in an increase of crystallinity. The results confirmed that the higher degree of crystallinity, the broader the peak around 913 cm(-1). Among all bacterial celluloses the bacterial cellulose cultured in presence of xyloglucan and pectin (BCPX) has the most similar structure to those observed in natural primary cell walls.", "title": "" }, { "docid": "041ca42d50e4cac92cf81c989a8527fb", "text": "Helix antenna consists of a single conductor or multi-conductor open helix-shaped. Helix antenna has a three-dimensional shape. The shape of the helix antenna resembles a spring and the diameter and the distance between the windings of a certain size. This study aimed to design a signal amplifier wifi on 2.4 GHz. Materials used in the form of the pipe, copper wire, various connectors and wireless adapters and various other components. Mmmanagal describing simulation result on helix antenna. Further tested with wirelesmon software to test the wifi signal strength. The results are based Mmanagal, radiation patterns emitted achieve Ganin: 4.5 dBi horizontal polarization, F / B: −0,41dB; rear azimuth 1200 elevation 600, 2400 MHz, R27.9 and jX impedance −430.9, Elev: 64.40 real GND: 0.50 m height, and wifi signal strength increased from 47% to 55%.", "title": "" }, { "docid": "86095b1b9900abb5a16cc7bfef8e1c39", "text": "We consider the problem of estimating the spatial layout of an indoor scene from a monocular RGB image, modeled as the projection of a 3D cuboid. Existing solutions to this problem often rely strongly on hand-engineered features and vanishing point detection, which are prone to failure in the presence of clutter. 
In this paper, we present a method that uses a fully convolutional neural network (FCNN) in conjunction with a novel optimization framework for generating layout estimates. We demonstrate that our method is robust in the presence of clutter and handles a wide range of highly challenging scenes. We evaluate our method on two standard benchmarks and show that it achieves state of the art results, outperforming previous methods by a wide margin.", "title": "" } ]
scidocsrr
039e3e5e4c9a46130c751fc12b95a679
On Loss Functions for Deep Neural Networks in Classification
[ { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" } ]
[ { "docid": "ada35607fa56214e5df8928008735353", "text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.", "title": "" }, { "docid": "11e2ec2aab62ba8380e82a18d3fcb3d8", "text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.", "title": "" }, { "docid": "cdcfd25cd84870b51297ec776c8fa447", "text": "This paper aims at the construction of a music composition system that generates 16-bars musical works by interaction between human and the system, using interactive genetic algorithm. 
The present system generates not only various kinds of melody parts but also various kinds of patterns of backing parts and tones of all parts, so that users can acquire satisfied musical work. The users choose generating mode of musical work from three points, i.e., melody part, tones of all parts, or patterns of backing parts, and the users evaluate impressions of presented candidates of musical work through the user interface. The present system generates the candidates based on user's subjective evaluation. This paper shows evaluation experiments to confirm the usefulness of the present system.", "title": "" }, { "docid": "9229c3eae864cf924226ffb483617220", "text": "Great effort has been put into the development of diagnosis methods for the most dangerous type of skin diseases Melanoma. This paper aims to develop a prototype capable of segment and classify skin lesions in dermoscopy images based on ABCD rule. The proposed work is divided into four distinct stages: 1) Pre-processing, consists of filtering and contrast enhancing techniques. 2) Segmentation, thresholding and statistical properties are computed to localize the lesion. 3) Features extraction, Asymmetry is calculated by averaging the calculated results of the two methods: Entropy and Bi-fold. Border irregularity is calculated by accumulate the statistical scores of the eight segments of the segmented lesion. Color feature is calculated among the existence of six candidate colors: white, black, red, light-brown, dark-brown, and blue-gray. Diameter is measured by the conversion operation from the total number of pixels in the greatest diameter into millimeter (mm). 4) Classification, the summation of the four extracted feature scores multiplied by their weights to yield a total dermoscopy score (TDS); hence, the lesion is classified into benign, suspicious, or malignant. The prototype is implemented in MATLAB and the dataset used consists of 200 dermoscopic images from Hospital Pedro Hispano, Matosinhos. The achieved results shows an acceptable performance rates, an accuracy 90%, sensitivity 85%, and specificity 92.22%.", "title": "" }, { "docid": "d3b2283ce3815576a084f98c34f37358", "text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.", "title": "" }, { "docid": "7bdebaf86fd679ae00520dc8f7ee3afa", "text": "Studies show that attractive women demonstrate stronger preferences for masculine men than relatively unattractive women do. Such condition-dependent preferences may occur because attractive women can more easily offset the costs associated with choosing a masculine partner, such as lack of commitment and less interest in parenting. 
Alternatively, if masculine men display negative characteristics less to attractive women than to unattractive women, attractive women may perceive masculine men to have more positive personality traits than relatively unattractive women do. We examined how two indices of women’s attractiveness, body mass index (BMI) and waist–hip ratio (WHR), relate to perceptions of both the attractiveness and trustworthiness of masculinized versus feminized male faces. Consistent with previous studies, women with a low (attractive) WHR had stronger preferences for masculine male faces than did women with a relatively high (unattractive) WHR. This relationship remained significant when controlling for possible effects of BMI. Neither WHR nor BMI predicted perceptions of trustworthiness. These findings present converging evidence for condition-dependent mate preferences in women and suggest that such preferences do not reflect individual differences in the extent to which pro-social traits are ascribed to feminine versus masculine men. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b4ed15850674851fb7e479b7181751d7", "text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.", "title": "" }, { "docid": "34ab20699d12ad6cca34f67cee198cd9", "text": "Such as relational databases, most graphs databases are OLTP databases (online transaction processing) of generic use and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on our understanding of how things are connected. This is more common than one may think. And in many cases it is not only how things are connected but often one wants to know something about the different relationships in our field their names, qualities, weight and so on. Briefly, connectivity is the key. The graphs are the best abstraction one has to model and query the connectivity; databases graphs in turn give developers and the data specialists the ability to apply this abstraction to their specific problems. For this purpose, in this paper one used this approach to simulate the route planner application, capable of querying connected data. Merely having keys and values is not enough; no more having data partially connected through joins semantically poor. We need both the connectivity and contextual richness to operate these solutions. The case study herein simulates a railway network railway stations connected with one another where each connection between two stations may have some properties. 
And one answers the question: how to find the optimized route (path) and know whether a station is reachable from one station or not and in which depth.", "title": "" }, { "docid": "df48f9d3096d8528e9f517783a044df8", "text": "We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a new attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. Our experiments show that both of these innovations enable our model to outperform strong baselines for dialogue-act classification on the MapTask and Switchboard corpora. In addition, we analyse empirically the effectiveness of each of these innovations.", "title": "" }, { "docid": "83b79fc95e90a303f29a44ef8730a93f", "text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and includes variety of objects like smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such enormous number of devices connected to internet provides many kinds of services. They also produce huge amount of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computer, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructures ,software and applications. Cloud based platforms help to connect to the things around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications. Hence, cloud acts as a front end to access IoT. Applications that interact with devices like sensors have special requirements of massive storage to store big data, huge computation power to enable the real time processing of the data, information and high speed network to stream audio or video. Here we have describe how Internet of Things and Cloud computing can work together can address the Big Data problems. We have also illustrated about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture, Environment monitoring,etc. Finally, we propose a prototype model for providing sensing as a service on cloud.", "title": "" }, { "docid": "be5b9ba8398732d0e5a55fd918097f36", "text": "There has been a significant amount of research in Artificial Intelligence focusing on the representation of legislation and regulations. The motivation for this has been twofold: on the one hand there have been opportunities for developing advisory systems for legal practitioners; on the other hand the law is a complex domain in which diverse modes of reasoning are employed, offering ample opportunity to test existing Artificial Intelligence techniques as well as to develop new ones. The general aim of the thesis is to explore the potential for developing logic-based tools for the analysis and representation of legal contracts, by considering the following two questions: (a) To what extent can techniques developed for the representation of legislation and regulations be transferred and applied usefully in the domain of legal contracts? (b) What features are specific to legal contracts and what techniques can be developed to address them? 
The intended applications include both the drafting of new contracts and the management and administration of existing ones, that is to say, the general problem of storing and retrieving information from large contractual documents, and more specific tasks such as monitoring compliance or establishing parties’ duties/rights under a given agreement when it is in force. Experimental material is drawn mostly from engineering contracts, which are typically large and complex and contain a multitude of interrelated provisions. The term ‘contract’ is commonly used to refer both to a legally binding agreement between (usually) two parties and to the document, that records such an agreement. The first part of the thesis is concerned with documents and the representation of contracts at the macro-level: the emphasis is on issues relevant to the design of structurally coherent documents. The thesis presents a document assembly tool designed to be applicable, where contract drafting is based on model-form contracts or existing examples of a given type. The main features of the approach are: (i) the representation addresses the structure and interrelationships between the constituent parts of contracts but not the text of the document itself; (ii) the representation of documents is separated from the mechanisms that manipulate it; and (iii) the drafting process is subject to a collection of explicitly represented constraints that govern the structure of documents. The second part of the thesis deals with the contents of agreements and representations at the micro-level. Micro-level drafting is the source of a host of issues ranging from representing the detailed wording of individual sections, to representing the nature of provisions (obligations, powers, reparations, procedures), to representing their \"fitness\" or effectiveness in securing some party's best interests. Various techniques are available to assist in aspects of this task, such as disambiguating contractual provisions and in detecting inconsistency or incompleteness. The second part of the thesis comprises three discussions. The first is on contractual obligations as the result of promissory exchanges between parties and draws upon work by Kimbrough and his associates. The second concentrates on contractual obligations and common patterns encountered in contracts. The third is concerned with temporal verification of contracts and shows how the techniques employed in model checking for hardware specification can be transferred to the domain of contracts.", "title": "" }, { "docid": "e00295dc86476d1d350d11068439fe87", "text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. 
A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.", "title": "" }, { "docid": "350c7855cf36fcde407a84f8b66f33d8", "text": "This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and $F_{0}$ features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.", "title": "" }, { "docid": "9cf59b5f67d07787da8eeae825066525", "text": "Event correlation has become the cornerstone of many reactive applications, particularly in distributed systems. However, support for programming with complex events is still rather specific and rudimentary. This paper presents EventJava, an extension of Java with generic support for event-based distributed programming. EventJava seamlessly integrates events with methods, and broadcasting with unicasting of events; it supports reactions to combinations of events, and predicates guarding those reactions. EventJava is implemented as a framework to allow for customization of event semantics, matching, and dispatching. We present its implementation, based on a compiler transforming specific primitives to Java, along with a reference implementation of the framework. We discuss ordering properties of EventJava through a formalization of its core as an extension of Featherweight Java. In a performance evaluation, we show that EventJava compares favorably to a highly tuned database-backed event correlation engine as well as to a comparably lightweight concurrency mechanism.", "title": "" }, { "docid": "268a7147cc4ae486bf4b9184787b9492", "text": "Autonomous vehicles will need to decide on a course of action when presented with multiple less-than-ideal outcomes.", "title": "" }, { "docid": "e8c97daac0301310074698273d813772", "text": "Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic procedurally generated objects, and train a deep neural network to perform grasp planning on these objects. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. 
This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and the real world. We find we can achieve a >90% success rate on previously unseen realistic objects at test time in simulation despite having only been trained on random objects. We also demonstrate an 80% success rate on real-world grasp attempts despite having only been trained on random simulated objects.", "title": "" }, { "docid": "339f0935708aa6c5f8be704a1e8004e5", "text": "Evolution sculpts both the body plans and nervous systems of agents together over time. By contrast, in artificial intelligence and robotics, a robot's body plan is usually designed by hand, and control policies are then optimized for that fixed design. The task of simultaneously co-optimizing the morphology and controller of an embodied robot has remained a challenge. In psychology, the theory of embodied cognition posits that behaviour arises from a close coupling between body plan and sensorimotor control, which suggests why co-optimizing these two subsystems is so difficult: most evolutionary changes to morphology tend to adversely impact sensorimotor control, leading to an overall decrease in behavioural performance. Here, we further examine this hypothesis and demonstrate a technique for 'morphological innovation protection', which temporarily reduces selection pressure on recently morphologically changed individuals, thus enabling evolution some time to 'readapt' to the new morphology with subsequent control policy mutations. We show the potential for this method to avoid local optima and converge to similar highly fit morphologies across widely varying initial conditions, while sustaining fitness improvements further into optimization. While this technique is admittedly only the first of many steps that must be taken to achieve scalable optimization of embodied machines, we hope that theoretical insight into the cause of evolutionary stagnation in current methods will help to enable the automation of robot design and behavioural training-while simultaneously providing a test bed to investigate the theory of embodied cognition.", "title": "" }, { "docid": "de96ac151e5a3a2b38f2fa309862faee", "text": "Venue recommendation is an important application for Location-Based Social Networks (LBSNs), such as Yelp, and has been extensively studied in recent years. Matrix Factorisation (MF) is a popular Collaborative Filtering (CF) technique that can suggest relevant venues to users based on an assumption that similar users are likely to visit similar venues. In recent years, deep neural networks have been successfully applied to tasks such as speech recognition, computer vision and natural language processing. Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word embeddings to incorporate auxiliary information (e.g. textual content of comments); and Recurrent Neural Networks (RNN) to capture sequential properties of observed user-venue interactions. However, such approaches rely on the traditional inner product of the latent factors of users and venues to capture the concept of collaborative filtering, which may not be sufficient to capture the complex structure of user-venue interactions. 
In this paper, we propose a Deep Recurrent Collaborative Filtering framework (DRCF) with a pairwise ranking function that aims to capture user-venue interactions in a CF manner from sequences of observed feedback by leveraging Multi-Layer Perception and Recurrent Neural Network architectures. Our proposed framework consists of two components: namely Generalised Recurrent Matrix Factorisation (GRMF) and Multi-Level Recurrent Perceptron (MLRP) models. In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors. In addition, we propose a novel sequence-based negative sampling approach that accounts for the sequential properties of observed feedback and geographical location of venues to enhance the quality of venue suggestions, as well as alleviate the cold-start users problem. Experiments on three large checkin and rating datasets show the effectiveness of our proposed framework by outperforming various state-of-the-art approaches.", "title": "" }, { "docid": "33390e96d05644da201db3edb3ad7338", "text": "This paper addresses the difficult problem of finding an optimal neural architecture design for a given image classification task. We propose a method that aggregates two main results of the previous state-of-the-art in neural architecture search. These are, appealing to the strong sampling efficiency of a search scheme based on sequential modelbased optimization (SMBO) [15], and increasing training efficiency by sharing weights among sampled architectures [18]. Sequential search has previously demonstrated its capabilities to find state-of-the-art neural architectures for image classification. However, its computational cost remains high, even unreachable under modest computational settings. Affording SMBO with weight-sharing alleviates this problem. On the other hand, progressive search with SMBO is inherently greedy, as it leverages a learned surrogate function to predict the validation error of neural architectures. This prediction is directly used to rank the sampled neural architectures. We propose to attenuate the greediness of the original SMBO method by relaxing the role of the surrogate function so it predicts architecture sampling probability instead. We demonstrate with experiments on the CIFAR-10 dataset that our method, denominated Efficient progressive neural architecture search (EPNAS), leads to increased search efficiency, while retaining competitiveness of found architectures.", "title": "" }, { "docid": "1da9ea0ec4c33454ad9217bcf7118c1c", "text": "We use quantitative media (blogs, and news as a comparison) data generated by a large-scale natural language processing (NLP) text analysis system to perform a comprehensive and comparative study on how a company’s reported media frequency, sentiment polarity and subjectivity anticipates or reflects its stock trading volumes and financial returns. Our analysis provides concrete evidence that media data is highly informative, as previously suggested in the literature – but never studied on our scale of several large collections of blogs and news for over five years. Building on our findings, we give a sentiment-based market-neutral trading strategy which gives consistently favorable returns with low volatility over a five year period (2005-2009). Our results are significant in confirming the performance of general blog and news sentiment analysis methods over broad domains and sources. 
Moreover, several remarkable differences between news and blogs are also identified in this paper.", "title": "" } ]
scidocsrr
f8b704d75e1aa835ad61212e9214ccad
Embedding Deep Metric for Person Re-identification A Study Against Large Variations
[ { "docid": "d98186e7dde031b99330be009b600e43", "text": "This paper contributes a new high quality dataset for person re-identification, named \"Market-1501\". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.", "title": "" } ]
[ { "docid": "e7230519f0bd45b70c1cbd42f09cb9e8", "text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.", "title": "" }, { "docid": "f9b11e55be907175d969cd7e76803caf", "text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.", "title": "" }, { "docid": "c460ac78bb06e7b5381506f54200a328", "text": "Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristical and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. 
Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms.", "title": "" }, { "docid": "3891138c186fa72cdf8a19ef6be33638", "text": "In the past decade, internet of things (IoT) has been a focus of research. Security and privacy are the key issues for IoT applications, and still face some enormous challenges. In order to facilitate this emerging domain, we in brief review the research progress of IoT, and pay attention to the security. By means of deeply analyzing the security architecture and features, the security requirements are given. On the basis of these, we discuss the research status of key technologies including encryption mechanism, communication security, protecting sensor data and cryptographic algorithms, and briefly outline the challenges.", "title": "" }, { "docid": "caad330df7dd6feb957af45a5dcfc524", "text": "FPGA-based hardware accelerator for convolutional neural networks (CNNs) has obtained great attentions due to its higher energy efficiency than GPUs. However, it has been a challenge for FPGA-based solutions to achieve a higher throughput than GPU counterparts. In this paper, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline (temporal parallelism) stages. Experiment results show that the proposed architecture running at 90 MHz on a Xilinx Virtex-7 FPGA achieves a computing throughput of 7.663 TOPS with a power consumption of 8.2 W regardless of the batch size of input data. This is 8.3x faster and 75x more energy-efficient than a Titan X GPU for processing online individual requests (in small batch size). For processing static data (in large batch size), the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5x higher energy efficiency.", "title": "" }, { "docid": "e706c5071b87561f08ee8f9610e41e2e", "text": "Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been proposed to limit the information provided to the adversary by omitting probability scores, significantly impacting the utility of the provided service. In this work, we illustrate how a service provider can still provide useful, albeit misleading, class probability information, while significantly limiting the success of the attack. Our defense forces the adversary to discard the class probabilities, requiring significantly more queries before they can train a model with comparable performance. We evaluate several attack strategies, model architectures, and hyperparameters under varying adversarial models, and evaluate the efficacy of our defense against the strongest adversary. Finally, we quantify the amount of noise injected into the class probabilities to mesure the loss in utility, e.g., adding 1.26 nats per query on CIFAR-10 and 3.27 on MNIST. 
Our evaluation shows our defense can degrade the accuracy of the stolen model at least 20%, or require up to 64 times more queries while keeping the accuracy of the protected model almost intact.", "title": "" }, { "docid": "d034e1b08f704c7245a50bb383206001", "text": "Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state of the art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general.", "title": "" }, { "docid": "bb13ad5b41abbf80f7e7c70a9098cd15", "text": "OBJECTIVE\nThis study assessed the psychological distress in Spanish college women and analyzed it in relation to sociodemographic and academic factors.\n\n\nPARTICIPANTS AND METHODS\nThe authors selected a stratified random sampling of 1,043 college women (average age of 22.2 years). Sociodemographic and academic information were collected, and psychological distress was assessed with the Symptom Checklist-90-Revised.\n\n\nRESULTS\nThis sample of college women scored the highest on the depression dimension and the lowest on the phobic anxiety dimension. The sample scored higher than women of the general population on the dimensions of obsessive-compulsive, interpersonal sensitivity, paranoid ideation, psychoticism, and on the Global Severity Index. Scores in the sample significantly differed based on age, relationship status, financial independence, year of study, and area of study.\n\n\nCONCLUSION\nThe results indicated an elevated level of psychological distress among college women, and therefore college health services need to devote more attention to their mental health.", "title": "" }, { "docid": "9546f8a74577cc1119e48fae0921d3cf", "text": "Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. 
Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.", "title": "" }, { "docid": "86d8a61771cd14a825b6fc652f77d1d6", "text": "The widespread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult account) is of significant values in protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics in Twitter content. In this paper, we formulate the adult account detection as a graph based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. As adult Twitter accounts are mostly connected with normal accounts and post many normal entities, which makes the graph full of noisy links, existing graph based classification techniques cannot work well on such a graph. To address this problem, we propose an iterative social based classifier (ISC), a novel graph based classification technique resistant to the noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC can achieve satisfactory performance in adult account detection, significantly outperforming existing techniques.", "title": "" }, { "docid": "08ab7142ae035c3594d3f3ae339d3e27", "text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.", "title": "" }, { "docid": "8f9e3bb85b4a2fcff3374fd700ac3261", "text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.", "title": "" }, { "docid": "ff2b53e0cecb849d1cbb503300f1ab9a", "text": "Receiving rapid, accurate and comprehensive knowledge about the conditions of damaged buildings after earthquake strike and other natural hazards is the basis of many related activities such as rescue, relief and reconstruction. Recently, commercial high-resolution satellite imagery such as IKONOS and QuickBird is becoming more powerful data resource for disaster management. In this paper, a method for automatic detection and classification of damaged buildings using integration of high-resolution satellite imageries and vector map is proposed. In this method, after extracting buildings position from vector map, they are located in the pre-event and post-event satellite images. 
By measuring and comparing different textural features for extracted buildings in both images, buildings conditions are evaluated through a Fuzzy Inference System. Overall classification accuracy of 74% and kappa coefficient of 0.63 were acquired. Results of the proposed method, indicates the capability of this method for automatic determination of damaged buildings from high-resolution satellite imageries.", "title": "" }, { "docid": "1324ee90acbdfe27a14a0d86d785341a", "text": "Though autonomous vehicles are currently operating in several places, many important questions within the field of autonomous vehicle research remain to be addressed satisfactorily. In this paper, we examine the role of communication between pedestrians and autonomous vehicles at unsignalized intersections. The nature of interaction between pedestrians and autonomous vehicles remains mostly in the realm of speculation currently. Of course, pedestrian’s reactions towards autonomous vehicles will gradually change over time owing to habituation, but it is clear that this topic requires urgent and ongoing study, not least of all because engineers require some working model for pedestrian-autonomous-vehicle communication. Our paper proposes a decision-theoretic model that expresses the interaction between a pedestrian and a vehicle. The model considers the interaction between a pedestrian and a vehicle as expressed an MDP, based on prior work conducted by psychologists examining similar experimental conditions. We describe this model and our simulation study of behavior it exhibits. The preliminary results on evaluating the behavior of the autonomous vehicle are promising and we believe it can help reduce the data needed to develop fuller models.", "title": "" }, { "docid": "8b5bf8cf3832ac9355ed5bef7922fb5c", "text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.", "title": "" }, { "docid": "687414897eabd32ebbbca6ae792d7148", "text": "When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. 
Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.", "title": "" }, { "docid": "89105546031fd478f1a1f3dcb9e25cdf", "text": "Effective and accurate diagnosis of Alzheimer's disease (AD) or mild cognitive impairment (MCI) can be critical for early treatment and thus has attracted more and more attention nowadays. Since first introduced, machine learning methods have been gaining increasing popularity for AD related research. Among the various identified biomarkers, magnetic resonance imaging (MRI) are widely used for the prediction of AD or MCI. However, before a machine learning algorithm can be applied, image features need to be extracted to represent the MRI images. While good representations can be pivotal to the classification performance, almost all the previous studies typically rely on human labelling to find the regions of interest (ROI) which may be correlated to AD, such as hippocampus, amygdala, precuneus, etc. This procedure requires domain knowledge and is costly and tedious. Instead of relying on extraction of ROI features, it is more promising to remove manual ROI labelling from the pipeline and directly work on the raw MRI images. In other words, we can let the machine learning methods to figure out these informative and discriminative image structures for AD classification. In this work, we propose to learn deep convolutional image features using unsupervised and supervised learning. Deep learning has emerged as a powerful tool in the machine learning community and has been successfully applied to various tasks. We thus propose to exploit deep features of MRI images based on a pre-trained large convolutional neural network (CNN) for AD and MCI classification, which spares the effort of manual ROI annotation process.", "title": "" }, { "docid": "3228df5de3c7d4a4ae61da815afa2bba", "text": "Abstract: The proposed zero-current-switching switched-capacitor quasi-resonant DC–DC converter is a new type of bidirectional power flow control conversion scheme. It possesses the conventional features of resonant switched-capacitor converters: low weight, small volume, high efficiency, low EMI emission and current stress. A zero-current-switching switched-capacitor stepup/step-down bidirectional converter is presented that can improve the current stress problem during bidirectional power flow control processing. It can provide a high voltage conversion ratio using four power MOSFET main switches, a set of switched capacitors and a small resonant inductor. The converter operating principle of the proposed bidirectional power conversion scheme is described in detail with circuit model analysis. Simulation and experiment are carried out to verify the concept and performance of the proposed bidirectional DC–DC converter.", "title": "" }, { "docid": "c63dcdd615007dfddca77e7bdf52c0eb", "text": "Essential tremor (ET) is a common movement disorder but its pathogenesis remains poorly understood. This has limited the development of effective pharmacotherapy. 
The current therapeutic armamentaria for ET represent the product of careful clinical observation rather than targeted molecular modeling. Here we review their pharmacokinetics, metabolism, dosing, and adverse effect profiles and propose a treatment algorithm. We also discuss the concept of medically refractory tremor, as therapeutic trials should be limited unless invasive therapy is contraindicated or not desired by patients.", "title": "" }, { "docid": "56e406924a967700fba3fe554b9a8484", "text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.", "title": "" } ]
scidocsrr
282fd5e1ecc75d94544a575d6877d55f
Emotional responses to a romantic partner's imaginary rejection: the roles of attachment anxiety, covert narcissism, and self-evaluation.
[ { "docid": "c5beaa8be086776c769caedc30815aa8", "text": "Three studies were conducted to examine the correlates of adult attachment. In Study 1, an 18-item scale to measure adult attachment style dimensions was developed based on Kazan and Shaver's (1987) categorical measure. Factor analyses revealed three dimensions underlying this measure: the extent to which an individual is comfortable with closeness, feels he or she can depend on others, and is anxious or fearful about such things as being abandoned or unloved. Study 2 explored the relation between these attachment dimensions and working models of self and others. Attachment dimensions were found to be related to self-esteem, expressiveness, instrumentality, trust in others, beliefs about human nature, and styles of loving. Study 3 explored the role of attachment style dimensions in three aspects of ongoing dating relationships: partner matching on attachment dimensions; similarity between the attachment of one's partner and caregiving style of one's parents; and relationship quality, including communication, trust, and satisfaction. Evidence was obtained for partner matching and for similarity between one's partner and one's parents, particularly for one's opposite-sex parent. Dimensions of attachment style were strongly related to how each partner perceived the relationship, although the dimension of attachment that best predicted quality differed for men and women. For women, the extent to which their partner was comfortable with closeness was the best predictor of relationship quality, whereas the best predictor for men was the extent to which their partner was anxious about being abandoned or unloved.", "title": "" }, { "docid": "c700a8a3dc4aa81c475e84fc1bbf9516", "text": "A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect.", "title": "" } ]
[ { "docid": "a4ff1ce29fb5f2be87c1868dcf96bd29", "text": "Web ontologies provide shared concepts for describing domain entities and thus enable semantic interoperability between applications. To facilitate concept sharing and ontology reusing, we developed Falcons Concept Search, a novel keyword-based ontology search engine. In this paper, we illustrate how the proposed mode of interaction helps users quickly find ontologies that satisfy their needs and present several supportive techniques including a new method of constructing virtual documents of concepts for keyword search, a popularity-based scheme to rank concepts and ontologies, and a way to generate query-relevant structured snippets. We also report the results of a usability evaluation as well as user feedback.", "title": "" }, { "docid": "7076f898c65a0e93a94357b757f92fc8", "text": "Understanding how to control how the brain's functioning mediates mental experience and the brain's processing to alter cognition or disease are central projects of cognitive and neural science. The advent of real-time functional magnetic resonance imaging (rtfMRI) now makes it possible to observe the biology of one's own brain while thinking, feeling and acting. Recent evidence suggests that people can learn to control brain activation in localized regions, with corresponding changes in their mental operations, by observing information from their brain while inside an MRI scanner. For example, subjects can learn to deliberately control activation in brain regions involved in pain processing with corresponding changes in experienced pain. This may provide a novel, non-invasive means of observing and controlling brain function, potentially altering cognitive processes or disease.", "title": "" }, { "docid": "65446279fb385c7a1f25f7b5ab3b4c2a", "text": "Children with autism are frequently observed to experience difficulties in sensory processing. This study examined specific patterns of sensory processing in 54 children with autistic disorder and their association with adaptive behavior. Model-based cluster analysis revealed three distinct sensory processing subtypes in autism. These subtypes were differentiated by taste and smell sensitivity and movement-related sensory behavior. Further, sensory processing subtypes predicted communication competence and maladaptive behavior. The findings of this study lay the foundation for the generation of more specific hypotheses regarding the mechanisms of sensory processing dysfunction in autism, and support the continued use of sensory-based interventions in the remediation of communication and behavioral difficulties in autism.", "title": "" }, { "docid": "9efd74df34775bc4c7a08230e67e990b", "text": "OBJECTIVE\nFirearm violence is a significant public health problem in the United States, and alcohol is frequently involved. This article reviews existing research on the relationships between alcohol misuse; ownership, access to, and use of firearms; and the commission of firearm violence, and discusses the policy implications of these findings.\n\n\nMETHOD\nNarrative review augmented by new tabulations of publicly-available data.\n\n\nRESULTS\nAcute and chronic alcohol misuse is positively associated with firearm ownership, risk behaviors involving firearms, and risk for perpetrating both interpersonal and self-directed firearm violence. In an average month, an estimated 8.9 to 11.7 million firearm owners binge drink. 
For men, deaths from alcohol-related firearm violence equal those from alcohol-related motor vehicle crashes. Enforceable policies restricting access to firearms for persons who misuse alcohol are uncommon. Policies that restrict access on the basis of other risk factors have been shown to reduce risk for subsequent violence.\n\n\nCONCLUSION\nThe evidence suggests that restricting access to firearms for persons with a documented history of alcohol misuse would be an effective violence prevention measure. Restrictions should rely on unambiguous definitions of alcohol misuse to facilitate enforcement and should be rigorously evaluated.", "title": "" }, { "docid": "122e3e4c10e4e5f2779773bde106d068", "text": "In recent years, research on image generation methods has been developing fast. The auto-encoding variational Bayes method (VAEs) was proposed in 2013, which uses variational inference to learn a latent space from the image database and then generates images using the decoder. The generative adversarial networks (GANs) came out as a promising framework, which uses adversarial training to improve the generative ability of the generator. However, the generated pictures by GANs are generally blurry. The deep convolutional generative adversarial networks (DCGANs) were then proposed to leverage the quality of generated images. Since the input noise vectors are randomly sampled from a Gaussian distribution, the generator has to map from a whole normal distribution to the images. This makes DCGANs unable to reflect the inherent structure of the training data. In this paper, we propose a novel deep model, called generative adversarial networks with decoder-encoder output noise (DE-GANs), which takes advantage of both the adversarial training and the variational Bayesain inference to improve the performance of image generation. DE-GANs use a pre-trained decoder-encoder architecture to map the random Gaussian noise vectors to informative ones and pass them to the generator of the adversarial networks. Since the decoder-encoder architecture is trained by the same images as the generators, the output vectors could carry the intrinsic distribution information of the original images. Moreover, the loss function of DE-GANs is different from GANs and DCGANs. A hidden-space loss function is added to the adversarial loss function to enhance the robustness of the model. Extensive empirical results show that DE-GANs can accelerate the convergence of the adversarial training process and improve the quality of the generated images.", "title": "" }, { "docid": "cd5210231c5fa099be6b858a3069414d", "text": "Fat grafting to the aging face has become an integral component of esthetic surgery. However, the amount of fat to inject to each area of the face is not standardized and has been based mainly on the surgeon’s experience. The purpose of this study was to perform a systematic review of injected fat volume to different facial zones. A systematic review of the literature was performed through a MEDLINE search using keywords “facial,” “fat grafting,” “lipofilling,” “Coleman technique,” “autologous fat transfer,” and “structural fat grafting.” Articles were then sorted by facial subunit and analyzed for: author(s), year of publication, study design, sample size, donor site, fat preparation technique, average and range of volume injected, time to follow-up, percentage of volume retention, and complications. Descriptive statistics were performed. Nineteen articles involving a total of 510 patients were included. 
Rhytidectomy was the most common procedure performed concurrently with fat injection. The mean volume of fat injected to the forehead is 6.5 mL (range 4.0–10.0 mL); to the glabellar region 1.4 mL (range 1.0–4.0 mL); to the temple 5.9 mL per side (range 2.0–10.0 mL); to the eyebrow 5.5 mL per side; to the upper eyelid 1.7 mL per side (range 1.5–2.5 mL); to the tear trough 0.65 mL per side (range 0.3–1.0 mL); to the infraorbital area (infraorbital rim to lower lid/cheek junction) 1.4 mL per side (range 0.9–3.0 mL); to the midface 1.4 mL per side (range 1.0–4.0 mL); to the nasolabial fold 2.8 mL per side (range 1.0–7.5 mL); to the mandibular area 11.5 mL per side (range 4.0–27.0 mL); and to the chin 6.7 mL (range 1.0–20.0 mL). Data on exactly how much fat to inject to each area of the face in facial fat grafting are currently limited and vary widely based on different methods and anatomical terms used. This review offers the ranges and the averages for the injected volume in each zone. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "06ba6c64fd0f45f61e4c2ca20c41f9d7", "text": "About ten years ago, the eld of range searching, especially simplex range searching, was wide open. At that time, neither e cient algorithms nor nontrivial lower bounds were known for most range-searching problems. A series of papers by Haussler and Welzl [161], Clarkson [88, 89], and Clarkson and Shor [92] not only marked the beginning of a new chapter in geometric searching, but also revitalized computational geometry as a whole. Led by these and a number of subsequent papers, tremendous progress has been made in geometric range searching, both in terms of developing e cient data structures and proving nontrivial lower bounds. From a theoretical point of view, range searching is now almost completely solved. The impact of general techniques developed for geometric range searching | \"-nets, 1=rcuttings, partition trees, multi-level data structures, to name a few | is evident throughout computational geometry. This volume provides an excellent opportunity to recapitulate the current status of geometric range searching and to summarize the recent progress in this area. Range searching arises in a wide range of applications, including geographic information systems, computer graphics, spatial databases, and time-series databases. Furthermore, a variety of geometric problems can be formulated as a range-searching problem. A typical range-searching problem has the following form. Let S be a set of n points in R , and let", "title": "" }, { "docid": "6997284b9a3b8c8e7af639e92399db46", "text": "Research into rehabilitation robotics has grown rapidly and the number of therapeutic rehabilitation robots has expanded dramatically during the last two decades. Robotic rehabilitation therapy can deliver high-dosage and high-intensity training, making it useful for patients with motor disorders caused by stroke or spinal cord disease. Robotic devices used for motor rehabilitation include end-effector and exoskeleton types; herein, we review the clinical use of both types. One application of robot-assisted therapy is improvement of gait function in patients with stroke. 
Both end-effector and the exoskeleton devices have proven to be effective complements to conventional physiotherapy in patients with subacute stroke, but there is no clear evidence that robotic gait training is superior to conventional physiotherapy in patients with chronic stroke or when delivered alone. In another application, upper limb motor function training in patients recovering from stroke, robot-assisted therapy was comparable or superior to conventional therapy in patients with subacute stroke. With end-effector devices, the intensity of therapy was the most important determinant of upper limb motor recovery. However, there is insufficient evidence for the use of exoskeleton devices for upper limb motor function in patients with stroke. For rehabilitation of hand motor function, either end-effector and exoskeleton devices showed similar or additive effects relative to conventional therapy in patients with chronic stroke. The present evidence supports the use of robot-assisted therapy for improving motor function in stroke patients as an additional therapeutic intervention in combination with the conventional rehabilitation therapies. Nevertheless, there will be substantial opportunities for technical development in near future.", "title": "" }, { "docid": "0360bfbb47af9e661114ea8d367a166f", "text": "Critical Discourse Analysis (CDA) is discourse analytical research that primarily studies the way social-power abuse and inequality are enacted, reproduced, legitimated, and resisted by text and talk in the social and political context. With such dissident research, critical discourse analysts take an explicit position and thus want to understand, expose, and ultimately challenge social inequality. This is also why CDA may be characterized as a social movement of politically committed discourse analysts. One widespread misunderstanding of CDA is that it is a special method of doing discourse analysis. There is no such method: in CDA all methods of the cross-discipline of discourse studies, as well as other relevant methods in the humanities and social sciences, may be used (Wodak and Meyer 2008; Titscher et al. 2000). To avoid this misunderstanding and to emphasize that many methods and approaches may be used in the critical study of text and talk, we now prefer the more general term critical discourse studies (CDS) for the field of research (van Dijk 2008b). However, since most studies continue to use the well-known abbreviation CDA, this chapter will also continue to use it. As an analytical practice, CDA is not one direction of research among many others in the study of discourse. Rather, it is a critical perspective that may be found in all areas of discourse studies, such as discourse grammar, Conversation Analysis, discourse pragmatics, rhetoric, stylistics, narrative analysis, argumentation analysis, multimodal discourse analysis and social semiotics, sociolinguistics, and ethnography of communication or the psychology of discourse-processing, among others. In other words, CDA is discourse study with an attitude. Some of the tenets of CDA could already be found in the critical theory of the Frankfurt School before World War II (Agger 1992b; Drake 2009; Rasmussen and Swindal 2004). 
Its current focus on language and discourse was initiated with the", "title": "" }, { "docid": "52e492ff5e057a8268fd67eb515514fe", "text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggest that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.", "title": "" }, { "docid": "e56bd360fe21949d0617c6e1ddafefff", "text": "This study addresses the problem of identifying the meaning of unknown words or entities in a discourse with respect to the word embedding approaches used in neural language models. We proposed a method for on-the-fly construction and exploitation of word embeddings in both the input and output layers of a neural model by tracking contexts. This extends the dynamic entity representation used in Kobayashi et al. (2016) and incorporates a copy mechanism proposed independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we construct a new task and dataset called Anonymized Language Modeling for evaluating the ability to capture word meanings while reading. Experiments conducted using our novel dataset show that the proposed variant of RNN language model outperformed the baseline model. Furthermore, the experiments also demonstrate that dynamic updates of an output layer help a model predict reappearing entities, whereas those of an input layer are effective to predict words following reappearing entities.", "title": "" }, { "docid": "51fe6376956593cb8a2e4de3b37cb8fe", "text": "The human musculoskeletal system is supposed to play an important role in doing various static and dynamic tasks. From this standpoint, some musculoskeletal humanoid robots have been developed in recent years. However, existing musculoskeletal robots did not have upper body with several DOFs to balance their bodies statically or did not have enough power to perform dynamic tasks. We think the musculoskeletal structure has two significant properties: whole-body flexibility and whole-body coordination. Using these two properties can enable us to make robots' performance better than before. In this study, we developed a humanoid robot with a musculoskeletal system that is driven by pneumatic artificial muscles. To demonstrate the robot's capability in static and dynamic tasks, we conducted two experiments. As a static task, we conducted a standing experiment using a simple feedback control and evaluated the stability by applying an impulse to the robot. 
As a dynamic task, we conducted a walking experiment using a feedforward controller with human muscle activation patterns and confirmed that the robot was able to perform the dynamic task.", "title": "" }, { "docid": "6e22c766fe7caaeb53251bdd9c6401e9", "text": "Task-space control of redundant robot systems based on analytical models is known to be susceptive to modeling errors. Data-driven model learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem. In particular, the same input data point can yield many different output values, which can form a nonconvex solution space. Because the problem is ill-posed, models cannot be learned from such data using common regression methods. While learning of task-space control mappings is globally ill-posed, it has been shown in recent work that it is locally a well-defined problem. In this paper, we use this insight to formulate a local kernel-based learning approach for online model learning for task-space tracking control. We propose a parametrization for the local model, which makes an application in task-space tracking control of redundant robots possible. The model parametrization further allows us to apply the kernel-trick and, therefore, enables a formulation within the kernel learning framework. In our evaluations, we show the ability of the method for online model learning for task-space tracking control of redundant robots.", "title": "" }, { "docid": "2eb5b8c0626ccce0121d8d3f9e01d274", "text": "Like full-text translation, cross-language information retrieval (CLIR) is a task that requires some form of knowledge transfer across languages. Although robust translation resources are critical for constructing high quality translation tools, manually constructed resources are limited both in their coverage and in their adaptability to a wide range of applications. Automatic mining of translingual knowledge makes it possible to complement hand-curated resources. This chapter describes a growing body of work that seeks to mine translingual knowledge from text data, in particular, data found on the Web. We review a number of mining and filtering strategies, and consider them in the context of statistical machine translation, showing that these techniques can be effective in collecting large quantities of translingual knowledge necessary", "title": "" }, { "docid": "1c9c93d1eff3904941516516a6390cdf", "text": "BACKGROUND\nSyndesmosis sprains can contribute to chronic pain and instability, which are often indications for surgical intervention. The literature lacks sufficient objective data detailing the complex anatomy and localized osseous landmarks essential for current surgical techniques.\n\n\nPURPOSE\nTo qualitatively and quantitatively analyze the anatomy of the 3 syndesmotic ligaments with respect to surgically identifiable bony landmarks.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nSixteen ankle specimens were dissected to identify the anterior inferior tibiofibular ligament (AITFL), posterior inferior tibiofibular ligament (PITFL), interosseous tibiofibular ligament (ITFL), and bony anatomy. Ligament lengths, footprints, and orientations were measured in reference to bony landmarks by use of an anatomically based coordinate system and a 3-dimensional coordinate measuring device.\n\n\nRESULTS\nThe syndesmotic ligaments were identified in all specimens. 
The pyramidal-shaped ITFL was the broadest, originating from the distal interosseous membrane expansion, extending distally, and terminating 9.3 mm (95% CI, 8.3-10.2 mm) proximal to the central plafond. The tibial cartilage extended 3.6 mm (95% CI, 2.8-4.4 mm) above the plafond, a subset of which articulated directly with the fibular cartilage located 5.2 mm (95% CI, 4.6-5.8 mm) posterior to the anterolateral corner of the tibial plafond. The primary AITFL band(s) originated from the tibia 9.3 mm (95% CI, 8.6-10.0 mm) superior and medial to the anterolateral corner of the tibial plafond and inserted on the fibula 30.5 mm (95% CI, 28.5-32.4 mm) proximal and anterior to the inferior tip of the lateral malleolus. Superficial fibers of the PITFL originated along the distolateral border of the posterolateral tubercle of the tibia 8.0 mm (95% CI, 7.5-8.4 mm) proximal and medial to the posterolateral corner of the plafond and inserted along the medial border of the peroneal groove 26.3 mm (95% CI, 24.5-28.1 mm) superior and posterior to the inferior tip of the lateral malleolus.\n\n\nCONCLUSION\nThe qualitative and quantitative anatomy of the syndesmotic ligaments was reproducibly described and defined with respect to surgically identifiable bony prominences.\n\n\nCLINICAL RELEVANCE\nData regarding anatomic attachment sites and distances to bony prominences can optimize current surgical fixation techniques, improve anatomic restoration, and reduce the risk of iatrogenic injury from malreduction or misplaced implants. Quantitative data also provide the consistency required for the development of anatomic reconstructions.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8c47d9a93e3b9d9f31b77b724bf45578", "text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.", "title": "" }, { "docid": "87f3c12df54f395b9a24ccfc4dd10aa8", "text": "The ever increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. 
In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state of the art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to overcome several state of the art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.", "title": "" }, { "docid": "879282128be8b423114401f6ec8baf8a", "text": "Yelp is one of the largest online searching and reviewing systems for kinds of businesses, including restaurants, shopping, home services et al. Analyzing the real world data from Yelp is valuable in acquiring the interests of users, which helps to improve the design of the next generation system. This paper targets the evaluation of Yelp dataset, which is provided in the Yelp data challenge. A bunch of interesting results are found. For instance, to reach any one in the Yelp social network, one only needs 4.5 hops on average, which verifies the classical six degree separation theory; Elite user mechanism is especially effective in maintaining the healthy of the whole network; Users who write less than 100 business reviews dominate. Those insights are expected to be considered by Yelp to make intelligent business decisions in the future.", "title": "" }, { "docid": "70fafdedd05a40db5af1eabdf07d431c", "text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.", "title": "" } ]
scidocsrr
91fc47e6263131bd21e7748f7a2b49fa
BitScope: Automatically Dissecting Malicious Binaries
[ { "docid": "f1f0c6518a34c0938e65e4de2b5ca7c0", "text": "Disassembly is the process of recovering a symbolic representation of a program’s machine code instructions from its binary representation. Recently, a number of techniques have been proposed that attempt to foil the disassembly process. These techniques are very effective against state-of-the-art disassemblers, preventing a substantial fraction of a binary program from being disassembled correctly. This could allow an attacker to hide malicious code from static analysis tools that depend on correct disassembler output (such as virus scanners). The paper presents novel binary analysis techniques that substantially improve the success of the disassembly process when confronted with obfuscated binaries. Based on control flow graph information and statistical methods, a large fraction of the program’s instructions can be correctly identified. An evaluation of the accuracy and the performance of our tool is provided, along with a comparison to several state-of-the-art disassemblers.", "title": "" } ]
[ { "docid": "bd7664e9ff585a48adca12c0a8d9bf95", "text": "Fueled by the widespread adoption of sensor-enabled smartphones, mobile crowdsourcing is an area of rapid innovation. Many crowd-powered sensor systems are now part of our daily life -- for example, providing highway congestion information. However, participation in these systems can easily expose users to a significant drain on already limited mobile battery resources. For instance, the energy burden of sampling certain sensors (such as WiFi or GPS) can quickly accumulate to levels users are unwilling to bear. Crowd system designers must minimize the negative energy side-effects of participation if they are to acquire and maintain large-scale user populations.\n To address this challenge, we propose Piggyback CrowdSensing (PCS), a system for collecting mobile sensor data from smartphones that lowers the energy overhead of user participation. Our approach is to collect sensor data by exploiting Smartphone App Opportunities -- that is, those times when smartphone users place phone calls or use applications. In these situations, the energy needed to sense is lowered because the phone need no longer be woken from an idle sleep state just to collect data. Similar savings are also possible when the phone either performs local sensor computation or uploads the data to the cloud. To efficiently use these sporadic opportunities, PCS builds a lightweight, user-specific prediction model of smartphone app usage. PCS uses this model to drive a decision engine that lets the smartphone locally decide which app opportunities to exploit based on expected energy/quality trade-offs.\n We evaluate PCS by analyzing a large-scale dataset (containing 1,320 smartphone users) and building an end-to-end crowdsourcing application that constructs an indoor WiFi localization database. Our findings show that PCS can effectively collect large-scale mobile sensor datasets (e.g., accelerometer, GPS, audio, image) from users while using less energy (up to 90% depending on the scenario) compared to a representative collection of existing approaches.", "title": "" }, { "docid": "398b72faa5922bd7af153f055c6344b5", "text": "As a key component of a plug-in hybrid electric vehicle (PHEV) charger system, the front-end ac-dc converter must achieve high efficiency and power density. This paper presents a topology survey evaluating topologies for use in front end ac-dc converters for PHEV battery chargers. The topology survey is focused on several boost power factor corrected converters, which offer high efficiency, high power factor, high density, and low cost. Experimental results are presented and interpreted for five prototype converters, converting universal ac input voltage to 400 V dc. The results demonstrate that the phase shifted semi-bridgeless PFC boost converter is ideally suited for automotive level I residential charging applications in North America, where the typical supply is limited to 120 V and 1.44 kVA or 1.92 kVA. For automotive level II residential charging applications in North America and Europe the bridgeless interleaved PFC boost converter is an ideal topology candidate for typical supplies of 240 V, with power levels of 3.3 kW, 5 kW, and 6.6 kW.", "title": "" }, { "docid": "cca664cf201c79508a266a34646dba01", "text": "Scholars have argued that online social networks and personalized web search increase ideological segregation. 
We investigate the impact of these potentially polarizing channels on news consumption by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that individuals indeed exhibit substantially higher segregation when reading articles shared on social networks or returned by search engines, a pattern driven by opinion pieces. However, these polarizing articles from social media and web search constitute only 2% of news consumption. Consequently, while recent technological changes do increase ideological segregation, the magnitude of the effect is limited. JEL: D83, L86, L82", "title": "" }, { "docid": "3df9bacf95281fc609ee7fd2d4724e91", "text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.", "title": "" }, { "docid": "99bed553411303f4800315ce5dff2139", "text": "In this work, we propose contextual language models that incorporate dialog level discourse information into language modeling. Previous works on contextual language model treat preceding utterances as a sequence of inputs, without considering dialog interactions. We design recurrent neural network (RNN) based contextual language models that specially track the interactions between speakers in a dialog. Experiment results on Switchboard Dialog Act Corpus show that the proposed model outperforms conventional single turn based RNN language model by 3.3% on perplexity. The proposed models also demonstrate advantageous performance over other competitive contextual language models.", "title": "" }, { "docid": "6ca7eafb36eebd1d14217b78660c40e0", "text": "The identification of the candidate genes for autism through linkage and association studies has proven to be a difficult enterprise. An alternative approach is the analysis of cytogenetic abnormalities associated with autism. We present a review of all studies to date that relate patients with cytogenetic abnormalities to the autism phenotype. A literature survey of the Medline and Pubmed databases was performed, using multiple keyword searches. Additional searches through cited references and abstracts from the major genetic conferences from 2000 onwards completed the search. The quality of the phenotype (i.e. of the autism spectrum diagnosis) was rated for each included case. Available specific probe and marker information was used to define optimally the boundaries of the cytogenetic abnormalities. In case of recurrent deletions or duplications on chromosome 15 and 22, the positions of the low copy repeats that are thought to mediate these rearrangements were used to define the most likely boundaries of the implicated ‘Cytogenetic Regions Of Interest’ (CROIs). 
If no molecular data were available, the sequence position of the relevant chromosome bands was used to obtain the approximate molecular boundaries of the CROI. The findings of the current review indicate: (1) several regions of overlap between CROIs and known loci of significant linkage and/or association findings, and (2) additional regions of overlap among multiple CROIs at the same locus. Whereas the first finding confirms previous linkage/association findings, the latter may represent novel, not previously identified regions containing genes that contribute to autism. This analysis not only has confirmed the presence of several known autism risk regions but has also revealed additional previously unidentified loci, including 2q37, 5p15, 11q25, 16q22.3, 17p11.2, 18q21.1, 18q23, 22q11.2, 22q13.3 and Xp22.2–p22.3.", "title": "" }, { "docid": "096b09f064643cbd2cd80f310981c5a6", "text": "A Ku-band 200-W pulsed solid-state power amplifier has been presented and designed by using a hybrid radial-/rectangular-waveguide spatially power-combining technique. The hybrid radial-/rectangular-waveguide power-dividing/power-combining circuit employed in this design provides not only a high power-combining efficiency over a wide bandwidth but also efficient heat sinking for the active power devices. A simple design approach of the presented power-dividing/power-combining structure has been developed. The measured small-signal gain of the pulsed power amplifier is about 51.3 dB over the operating frequency range, while the measured maximum output power at 1-dB compression is 209 W at 13.9 GHz, with an active power-combining efficiency of about 91%. Furthermore, the active power-combining efficiency is greater than 82% from 13.75 to 14.5 GHz.", "title": "" }, { "docid": "8abedc8a3f3ad84c940e38735b759745", "text": "Degeneration is a senescence process that occurs in all living organisms. Although tremendous efforts have been exerted to alleviate this degenerative tendency, minimal progress has been achieved to date. The nematode, Caenorhabditis elegans (C. elegans), which shares over 60% genetic similarities with humans, is a model animal that is commonly used in studies on genetics, neuroscience, and molecular gerontology. However, studying the effect of exercise on C. elegans is difficult because of its small size unlike larger animals. To this end, we fabricated a flow chamber, called \"worm treadmill,\" to drive worms to exercise through swimming. In the device, the worms were oriented by electrotaxis on demand. After the exercise treatment, the lifespan, lipofuscin, reproductive capacity, and locomotive power of the worms were analyzed. The wild-type and the Alzheimer's disease model strains were utilized in the assessment. Although degeneration remained irreversible, both exercise-treated strains indicated an improved tendency compared with their control counterparts. Furthermore, low oxidative stress and lipofuscin accumulation were also observed among the exercise-treated worms. We conjecture that escalated antioxidant enzymes imparted the worms with an extra capacity to scavenge excessive oxidative stress from their bodies, which alleviated the adverse effects of degeneration. Our study highlights the significance of exercise in degeneration from the perspective of the simple life form, C. 
elegans.", "title": "" }, { "docid": "99e47a88f0950c1928557857facb35d5", "text": "We present the NBA framework, which extends the architecture of the Click modular router to exploit modern hardware, adapts to different hardware configurations, and reaches close to their maximum performance without manual optimization. NBA takes advantages of existing performance-excavating solutions such as batch processing, NUMA-aware memory management, and receive-side scaling with multi-queue network cards. Its abstraction resembles Click but also hides the details of architecture-specific optimization, batch processing that handles the path diversity of individual packets, CPU/GPU load balancing, and complex hardware resource mappings due to multi-core CPUs and multi-queue network cards. We have implemented four sample applications: an IPv4 and an IPv6 router, an IPsec encryption gateway, and an intrusion detection system (IDS) with Aho-Corasik and regular expression matching. The IPv4/IPv6 router performance reaches the line rate on a commodity 80 Gbps machine, and the performances of the IPsec gateway and the IDS reaches above 30 Gbps. We also show that our adaptive CPU/GPU load balancer reaches near-optimal throughput in various combinations of sample applications and traffic conditions.", "title": "" }, { "docid": "063287a98a5a45bc8e38f8f8c193990e", "text": "This paper investigates the relationship between the contextual factors related to the firm’s decision-maker and the process of international strategic decision-making. The analysis has been conducted focusing on small and medium-sized enterprises (SME). Data for the research came from 111 usable responses to a survey on a sample of SME decision-makers in international field. The results of regression analysis indicate that the context variables, both internal and external, exerted more influence on international strategic decision making process than the decision-maker personality characteristics. DOI: 10.4018/ijabe.2013040101 2 International Journal of Applied Behavioral Economics, 2(2), 1-22, April-June 2013 Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The purpose of this paper is to reverse this trend and to explore the different dimensions of SMEs’ strategic decision-making process in international decisions and, within these dimensions, we want to understand if are related to the decision-maker characteristics and also to broader contextual factors characteristics. The paper is organized as follows. In the second section the concepts of strategic decision-making process and factors influencing international SDMP are approached. Next, the research methodology, findings analysis and discussion will be presented. Finally, conclusions, limitations of the study and suggestions for future research are explored. THEORETICAL BACKGROUND Strategic Decision-Making Process The process of making strategic decisions has emerged as one of the most important themes of strategy research over the last two decades (Papadakis, 2006; Papadakis & Barwise, 2002). 
According to Harrison (1996), the SMDP can be defined as a combination of the concepts of strategic gap and management decision making process, with the former “determined by comparing the organization’s inherent capabilities with the opportunities and threats in its external environment”, while the latter is composed by a set of decision-making functions logically connected, that begins with the setting of managerial objective, followed by the search for information to develop a set of alternatives, that are consecutively compared and evaluated, and selected. Afterward, the selected alternative is implemented and, finally, it is subjected to follow-up and control. Other authors (Fredrickson, 1984; Mintzberg, Raisinghani, & Theoret, 1976) developed several models of strategic decision-making process since 1970, mainly based on the number of stages (Nooraie, 2008; Nutt, 2008). Although different researches investigated SDMP with specific reference to either small firms (Brouthers, et al., 1998; Gibcus, Vermeulen, & Jong, 2009; Huang, 2009; Jocumsen, 2004), or internationalization process (Aharoni, Tihanyi, & Connelly, 2011; Dimitratos, et al., 2011; Nielsen & Nielsen, 2011), there is a lack of studies that examine the SDMP in both perspectives. In this study we decided to mainly follow the SDMP defined by Harrison (1996) adapted to the international arena and particularly referred to market development decisions. Thus, for the definition of objectives (first phase) we refer to those in international field, for search for information, development and comparison of alternatives related to foreign markets (second phase) we refer to the systematic International Market Selection (IMS), and to the Entry Mode Selection (EMS) methodologies. For the implementation of the selected alternative (third phase) we mainly mean the entering in a particular foreign market with a specific entry mode, and finally, for follow-up and control (fourth phase) we refer to the control and evaluation of international activities. Dimensions of the Strategic Decision-Making Process Several authors attempted to implement a set of dimensions in approaching strategic process characteristics, and the most adopted are: • Rationality; • Formalization; • Hierarchical Decentralization and lateral communication; • Political Behavior.", "title": "" }, { "docid": "c841938f03a07fffc5150fbe18f8f740", "text": "Ensemble modeling is now a well-established means for improving prediction accuracy; it enables you to average out noise from diverse models and thereby enhance the generalizable signal. Basic stacked ensemble techniques combine predictions from multiple machine learning algorithms and use these predictions as inputs to second-level learning models. This paper shows how you can generate a diverse set of models by various methods such as forest, gradient boosted decision trees, factorization machines, and logistic regression and then combine them with stacked-ensemble techniques such as hill climbing, gradient boosting, and nonnegative least squares in SAS Visual Data Mining and Machine Learning. The application of these techniques to real-world big data problems demonstrates how using stacked ensembles produces greater prediction accuracy and robustness than do individual models. The approach is powerful and compelling enough to alter your initial data mining mindset from finding the single best model to finding a collection of really good complementary models. 
It does involve additional cost due both to training a large number of models and the proper use of cross validation to avoid overfitting. This paper shows how to efficiently handle this computational expense in a modern SAS environment and how to manage an ensemble workflow by using parallel computation in a distributed framework.", "title": "" }, { "docid": "ec1f585fbb97c8e6468dd992e1a933ff", "text": "Scientists continue to find challenges in the ever increasing amount of information that has been produced on a world wide scale, during the last decades. When writing a paper, an author searches for the most relevant citations that started or were the foundation of a particular topic, which would very likely explain the thinking or algorithms that are employed. The search is usually done using specific keywords submitted to literature search engines such as Google Scholar and CiteSeer. However, finding relevant citations is distinctive from producing articles that are only topically similar to an author's proposal. In this paper, we address the problem of citation recommendation using a singular value decomposition approach. The models are trained and evaluated on the Citeseer digital library. The results of our experiments show that the proposed approach achieves significant success when compared with collaborative filtering methods on the citation recommendation task.", "title": "" }, { "docid": "136fadcc21143fd356b48789de5fb2b0", "text": "Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities.", "title": "" }, { "docid": "89725ed15bec80072198dbab9f6f75eb", "text": "OBJECTIVE\nTo present the clinical and roentgenographic features of caudal duplication syndrome.\n\n\nDESIGN\nRetrospective review of the medical records and all available imaging studies.\n\n\nSETTING\nTwo university-affiliated teaching hospitals.\n\n\nPARTICIPANTS\nSix children with multiple anomalies and duplications of distal organs derived from the hindgut, neural tube, and adjacent mesoderm.\n\n\nINTERVENTIONS\nNone.\n\n\nRESULTS\nSpinal anomalies (myelomeningocele in two patients, sacral duplication in three, diplomyelia in two, and hemivertebrae in one) were present in all our patients. Duplications or anomalies of the external genitalia and/or the lower urinary and reproductive structures were also seen in all our patients. 
Ventral herniation (in one patient), intestinal obstructions (in one patient), and bowel duplications (in two patients) were the most common gastrointestinal abnormalities.\n\n\nCONCLUSIONS\nWe believe that the above constellation of abnormalities resulted from an insult to the caudal cell mass and hindgut at approximately the 23rd through the 25th day of gestation. We propose the term caudal duplication syndrome to describe the association between gastrointestinal, genitourinary, and distal neural tube malformations.", "title": "" }, { "docid": "a5a86ecd39df5b032f4fa4f22362c914", "text": "Diet strongly affects human health, partly by modulating gut microbiome composition. We used diet inventories and 16S rDNA sequencing to characterize fecal samples from 98 individuals. Fecal communities clustered into enterotypes distinguished primarily by levels of Bacteroides and Prevotella. Enterotypes were strongly associated with long-term diets, particularly protein and animal fat (Bacteroides) versus carbohydrates (Prevotella). A controlled-feeding study of 10 subjects showed that microbiome composition changed detectably within 24 hours of initiating a high-fat/low-fiber or low-fat/high-fiber diet, but that enterotype identity remained stable during the 10-day study. Thus, alternative enterotype states are associated with long-term diet.", "title": "" }, { "docid": "2be35e0e63316137b3426fffd397111c", "text": "Face detection is essential to facial analysis tasks, such as facial reenactment and face recognition. Both cascade face detectors and anchor-based face detectors have translated shining demos into practice and received intensive attention from the community. However, cascade face detectors often suffer from a low detection accuracy, while anchor-based face detectors rely heavily on very large neural networks pre-trained on large-scale image classification datasets such as ImageNet, which is not computationally efficient for both training and deployment. In this paper, we devise an efficient anchor-based cascade framework called anchor cascade. To improve the detection accuracy by exploring contextual information, we further propose a context pyramid maxout mechanism for anchor cascade. As a result, anchor cascade can train very efficient face detection models with a high detection accuracy. Specifically, compared with a popular convolutional neural network (CNN)-based cascade face detector MTCNN, our anchor cascade face detector greatly improves the detection accuracy, e.g., from 0.9435 to 0.9704 at $1k$ false positives on FDDB, while it still runs in comparable speed. Experimental results on two widely used face detection benchmarks, FDDB and WIDER FACE, demonstrate the effectiveness of the proposed framework.", "title": "" }, { "docid": "ecd99c9f87e1c5e5f529cb5fcbb206f2", "text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. 
Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. Invariably, insights gained from these analyses can be used to develop effective problem solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. 
Next, discussion on a distributed problemsolving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supplychain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. By analyzing this information, members of a supply chain may choose to modify their behavior attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Paretooptimal (satisficing), not optimizing. Supply-chain problems similar to many real world applications involve several objective functions of its members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multiobjective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions. That is, solutions for a supplychain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring.
A cooperative supply-chain A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members namely, suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label(CSC). Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decisionmaking processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors, cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as, zippers, buttons, thread, garment tags, etc. Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dying yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be", "title": "" }, { "docid": "3fffd4317116d8ff0165916681ce1c46", "text": "The challenges of Machine Reading and Knowledge Extraction at a web scale require a system capable of extracting diverse information from large, heterogeneous corpora. The Open Information Extraction (OIE) paradigm aims at extracting assertions from large corpora without requiring a vocabulary or relation-specific training data. Most systems built on this paradigm extract binary relations from arbitrary sentences, ignoring the context under which the assertions are correct and complete. They lack the expressiveness needed to properly represent and extract complex assertions commonly found in the text. To address the lack of representation power, we propose NESTIE, which uses a nested representation to extract higher-order relations, and complex, interdependent assertions. Nesting the extracted propositions allows NESTIE to more accurately reflect the meaning of the original sentence. 
Our experimental study on real-world datasets suggests that NESTIE obtains comparable precision with better minimality and informativeness than existing approaches. NESTIE produces 1.7-1.8 times more minimal extractions and achieves 1.1-1.2 times higher informativeness than CLAUSIE.", "title": "" } ]
scidocsrr
e8dc792a00fbb4b8f024f2d4b08791a2
Robust Camera Calibration and Player Tracking in Broadcast Basketball Video
[ { "docid": "f48e6475c0afeac09262cdc2f5681208", "text": "Semantic analysis of sport sequences requires camera calibration to obtain player and ball positions in real-world coordinates. For court sports like tennis, the marker lines on the field can be used to determine the calibration parameters. We propose a real-time calibration algorithm that can be applied to all court sports simply by exchanging the court model. The algorithm is based on (1) a specialized court-line detector, (2) a RANSAC-based line parameter estimation, (3) a combinatorial optimization step to localize the court within the set of detected line segments, and (4) an iterative court-model tracking step. Our results show real-time calibration of, e.g., tennis and soccer sequences with a computation time of only about 6 ms per frame.", "title": "" }, { "docid": "cfadde3d2e6e1d6004e6440df8f12b5a", "text": "We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses the line markings of the court for calibration and it can be applied to a variety of different sports since the geometric model of the court can be specified by the user. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture restrictions. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the following input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.", "title": "" }, { "docid": "bc4791523b11a235d0b1c9e660ea1139", "text": "In this paper, we present a novel system and effective algorithms for soccer video segmentation. The output, about whether the ball is in play, reveals high-level structure of the content. The first step is to classify each sample frame into 3 kinds of view using a unique domain-specific feature, grass-area-ratio. Here the grass value and classification rules are learned and automatically adjusted to each new clip. Then heuristic rules are used in processing the view label sequence, and obtain play/break status of the game. The results provide good basis for detailed content analysis in next step. We also show that lowlevel features and mid-level view classes can be combined to extract more information about the game, via the example of detecting grass orientation in the field. The results are evaluated under different metrics intended for different applications; the best result in segmentation is 86.5%.", "title": "" } ]
[ { "docid": "9bbc279974aaa899d12fee26948ce029", "text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.", "title": "" }, { "docid": "8183fe0c103e2ddcab5b35549ed8629f", "text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.", "title": "" }, { "docid": "05d3d0d62d2cff27eace1fdfeecf9814", "text": "This article solves the equilibrium problem in a pure-exchange, continuous-time economy in which some agents face information costs or other types of frictions effectively preventing them from investing in the stock market. Under the assumption that the restricted agents have logarithmic utilities, a complete characterization of equilibrium prices and consumption/ investment policies is provided. A simple calibration shows that the model can help resolve some of the empirical asset pricing puzzles.", "title": "" }, { "docid": "d2eb6c8dc6a3dd475248582361e89284", "text": "In the last few years, uncertainty management has come to be recognized as a fundamental aspect of data integration. It is now accepted that it may not be possible to remove uncertainty generated during data integration processes and that uncertainty in itself may represent a source of relevant information. Several issues, such as the aggregation of uncertain mappings and the querying of uncertain mediated schemata, have been addressed by applying well-known uncertainty management theories. However, several problems lie unresolved. 
This article sketches an initial picture of this highly active research area; it details existing works in the light of a homogeneous framework, and identifies and discusses the leading issues awaiting solutions.", "title": "" }, { "docid": "95612aa090b77fc660279c5f2886738d", "text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.", "title": "" }, { "docid": "21324c71d70ca79d2f2c7117c759c915", "text": "The wide-spread of social media provides unprecedented sources of written language that can be used to model and infer online demographics. In this paper, we introduce a novel visual text analytics system, DemographicVis, to aid interactive analysis of such demographic information based on user-generated content. Our approach connects categorical data (demographic information) with textual data, allowing users to understand the characteristics of different demographic groups in a transparent and exploratory manner. The modeling and visualization are based on ground truth demographic information collected via a survey conducted on Reddit.com. Detailed user information is taken into our modeling process that connects the demographic groups with features that best describe the distinguishing characteristics of each group. Features including topical and linguistic are generated from the user-generated contents. Such features are then analyzed and ranked based on their ability to predict the users' demographic information. To enable interactive demographic analysis, we introduce a web-based visual interface that presents the relationship of the demographic groups, their topic interests, as well as the predictive power of various features. We present multiple case studies to showcase the utility of our visual analytics approach in exploring and understanding the interests of different demographic groups. 
We also report results from a comparative evaluation, showing that the DemographicVis is quantitatively superior or competitive and subjectively preferred when compared to a commercial text analysis tool.", "title": "" }, { "docid": "ae45ce27587d855735b3e8e67785f17b", "text": "Word sense disambiguation has been recognized as a major problem in natural language processing research for over forty years. Both quantitive and qualitative methods have been tried, but much of this work has been stymied by difficulties in acquiring appropriate lexical resources, such as semantic networks and annotated corpora. In particular, much of the work on qualitative methods has had to focus on ‘‘toy’’ domains since currently available semantic networks generally lack broad coverage. Similarly, much of the work on quantitative methods has had to depend on small amounts of hand-labeled text for testing and training. We have achieved considerable progress recently by taking advantage of a new source of testing and training materials. Rather than depending on small amounts of hand-labeled text, we have been making use of relatively large amounts of parallel text, text such as the Canadian Hansards, which are available in multiple languages. The translation can often be used in lieu of hand-labeling. For example, consider the polysemous word sentence, which has two major senses: (1) a judicial sentence, and (2), a syntactic sentence. We can collect a number of sense (1) examples by extracting instances that are translated as peine, and we can collect a number of sense (2) examples by extracting instances that are translated as phrase. In this way, we have been able to acquire a considerable amount of testing and training material for developing and testing our disambiguation algorithms. The availability of this testing and training material has enabled us to develop quantitative disambiguation methods that achieve 92 percent accuracy in discriminating between two very distinct senses of a noun such as sentence. In the training phase, we collect a number of instances of each sense of the polysemous noun. Then in the testing phase, we are given a new instance of the noun, and are asked to assign the instance to one of the senses. We attempt to answer this question by comparing the context of the unknown instance with contexts of known instances using a Bayesian argument that has been applied successfully in related tasks such as author identification and information retrieval. The Bayesian classifier requires an estimate of Pr(wsense), the probability of finding the word w in a particular context. Care must be taken in estimating these probabilities since there are so many parameters (e.g., 100,000 for each sense) and so little training material (e.g., 5,000 words for each sense). We have found that it helps to smooth the estimates obtained from the training material with estimates obtained from the entire corpus. The idea is that the training material provides poorly measured estimates, whereas the entire corpus provides less relevant estimates. We seek a trade-off between measurement errors and relevance using a novel interpolation procedure that has one free parameter, an estimate of how much the conditional probabilities Pr(wsense) will differ from the global probabilities Pr(w). 
In the sense tagging application, we expect quite large differences, more than 20% of the vocabulary behaves very differently in the conditional context; in other applications such as author identification, we expect much smaller differences and find that less than 2% of the vocabulary depends very much on the author. The ‘‘sense disambiguation’’ problem covers a broad set of issues. Dictionaries, for example, make use of", "title": "" }, { "docid": "2baf55123171c6e2110b19b1583c3d17", "text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.", "title": "" }, { "docid": "28c22ea34762a7bf65fdc50a37b558f5", "text": "Web threats pose the most significant cyber threat. Websites have been developed or manipulated by attackers for use as attack tools. Existing malicious website detection techniques can be classified into the categories of static and dynamic detection approaches, which respectively aim to detect malicious websites by analyzing web contents, and analyzing run-time behaviors using honeypots. However, existing malicious website detection approaches have technical and computational limitations to detect sophisticated attacks and analyze massive collected data. The main objective of this research is to minimize the limitations of malicious website detection. This paper presents a novel cross-layer malicious website detection approach which analyzes network-layer traffic and application-layer website contents simultaneously. Detailed data collection and performance evaluation methods are also presented. Evaluation based on data collected during 37 days shows that the computing time of the cross-layer detection is 50 times faster than the dynamic approach while detection can be almost as effective as the dynamic approach. Experimental results indicate that the cross-layer detection outperforms existing malicious website detection techniques.", "title": "" }, { "docid": "15cfa9005e68973cbca60f076180b535", "text": "Much of the literature on fair classifiers considers the case of a single classifier used once, in isolation. We initiate the study of composition of fair classifiers. In particular, we address the pitfalls of näıve composition and give general constructions for fair composition. Focusing on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], we also extend our results to a large class of group fairness definitions popular in the recent literature. We exhibit several cases in which group fairness definitions give misleading signals under composition and conclude that additional context is needed to evaluate both group and individual fairness under composition.", "title": "" }, { "docid": "1df4fad2d5448364834608f4bc9d10a0", "text": "What causes adolescents to be materialistic? Prior research shows parents and peers are an important influence. Researchers have viewed parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents. 
We take a different approach, viewing parents and peers as important sources of emotional support and psychological well-being, which increase self-esteem in adolescents. Supportive parents and peers boost adolescents' self-esteem, which decreases their need to turn to material goods to develop positive self-perceptions. In a study with 12–18 year-olds, we find support for our view that self-esteem mediates the relationship between parent/peer influence and adolescent materialism. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Rising levels of materialism among adolescents have raised concerns among parents, educators, and consumer advocates. More than half of 9–14 year-olds agree that, "when you grow up, the more money you have, the happier you are," and over 60% agree that, "the only kind of job I want when I grow up is one that gets me a lot of money" (Goldberg, Gorn, Peracchio, & Bamossy, 2003). These trends have led social scientists to conclude that adolescents today are "...the most brand-oriented, consumer-involved, and materialistic generation in history" (Schor, 2004, p. 13). What causes adolescents to be materialistic? The most consistent finding to date is that adolescent materialism is related to the interpersonal influences in their lives—notably, parents and peers. The vast majority of research is based on a social influence perspective, viewing parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents through modeling, reinforcement, and social interaction. In early research, Churchill and Moschis (1979) proposed that adolescents learn rational aspects of consumption from their parents and social aspects of consumption (materialism) from their peers. Moore and Moschis (1981) examined family communication styles, suggesting that certain styles (socio-oriented) promote conformity to others' views, setting the stage for materialism. In later work, Goldberg et al. (2003) posited that parents transmit materialistic values to their offspring by modeling these values. Researchers have also reported positive correlations between materialism and socio-oriented family communication (Moore & Moschis, 1981), parents' materialism (Flouri, 2004; Goldberg et al., 2003), peer communication about consumption (Churchill & Moschis, 1979; Moschis & Churchill, 1978), and susceptibility to peer influence (Achenreiner, 1997; Banerjee & Dittmar, 2008; Roberts, Manolis, & Tanner, 2008). We take a different approach. Instead of viewing parents and peers as socialization agents that transmit consumption attitudes and values, we consider parents and peers as important sources of emotional support and psychological well-being, which lay the foundation for self-esteem in adolescents. We argue that supportive parents and peers boost adolescents' self-esteem, which decreases their need to embrace material goods as a way to develop positive self-perceptions. Prior research is suggestive of our perspective. In studies with young adults, researchers have found a link between (1) lower parental support (cold and controlling mothers) and a focus on financial success aspirations (Kasser, Ryan, Zax, & Sameroff, 1995: 18 year-olds) and (2) lower parental support (less affection and supervision) in divorced families and materialism (Rindfleisch, Burroughs, & Denton, 1997: 20–32 year-olds). [Footnote 1: Support refers to warmth, affection, nurturance, and acceptance (Becker, 1981; Ellis, Thomas, and Rollins, 1976). Parental nurturance involves the development of caring relationships, in which parents reason with their children about moral conflicts, involve them in family decision making, and set high moral expectations (Maccoby, 1984; Staub, 1988).] These studies do not focus on adolescents, do not examine peer factors, nor do they include measures of self-esteem or self-worth. But, they do suggest that parents and peers can influence materialism in ways other than transmitting consumption attitudes and values, which has been the focus of prior research on adolescent materialism. In this article, we seek preliminary evidence for our view by testing whether self-esteem mediates the relationship between parent/peer influence and adolescent materialism. We include parent and peer factors that inhibit or encourage adolescent materialism, which allows us to test self-esteem as a mediator under both conditions. For parental influence, we include parental support (inhibits materialism) and parents' materialism (encourages materialism). Both factors have appeared in prior materialism studies, but our interest here is whether self-esteem is a mediator of their influence on materialism. For peer influence, we include peer support (inhibits materialism) and peers' materialism (encourages materialism), with our interest being whether self-esteem is a mediator of their influence on materialism. These peer factors are new to materialism research and offer potentially new insights. Contrary to prior materialism research, which views peers as encouraging materialism among adolescents, we also consider the possibility that peers may be a positive influence by providing emotional support in the same way that parents do. Our research offers several contributions to understanding materialism in adolescents. First, we provide a broader perspective on the role of parents and peers as influences on adolescent materialism. The social influence perspective, which views parents and peers as transmitting consumption attitudes and values, has dominated materialism research with children and adolescents since its early days. We provide a broader perspective by considering parents and peers as much more than socialization agents—they contribute heavily to the sense of self-esteem that adolescents possess, which influences materialism. Second, our perspective provides a process explanation for why parents and peers influence materialism that can be empirically tested. Prior research offers a valuable set of findings about what factors correlate with adolescent materialism, but the process responsible for the correlation is left untested. Finally, we provide a parsimonious explanation for why different factors related to parent and peer influence affect adolescent materialism.
Although the number of potential parent and peer factors is large, it is possible that there is a common thread (self-esteem) for why these factors influence adolescent materialism. Isolating mediators, such as selfesteem, could provide the basis for developing a conceptual framework to tie together findings across prior studies with different factors, providing a more unified explanation for why certain adolescents are more vulnerable to materialism.", "title": "" }, { "docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2", "text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.", "title": "" }, { "docid": "b15c689ff3dd7b2e7e2149e73b5451ac", "text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. 
Implications of the findings for online travel communities and tourism marketers are discussed.", "title": "" }, { "docid": "e18b08d7f7895339b432a9f9faf5a923", "text": "We present a parallelized navigation architecture that is capable of running in real-time and incorporating long-term loop closure constraints while producing the optimal Bayesian solution. This architecture splits the inference problem into a low-latency update that incorporates new measurements using just the most recent states (filter), and a high-latency update that is capable of closing long loops and smooths using all past states (smoother). This architecture employs the probabilistic graphical models of Factor Graphs, which allows the low-latency inference and highlatency inference to be viewed as sub-operations of a single optimization performed within a single graphical model. A specific factorization of the full joint density is employed that allows the different inference operations to be performed asynchronously while still recovering the optimal solution produced by a full batch optimization. Due to the real-time, asynchronous nature of this algorithm, updates to the state estimates from the highlatency smoother will naturally be delayed until the smoother calculations have completed. This architecture has been tested within a simulated aerial environment and on real data collected from an autonomous ground vehicle. In all cases, the concurrent architecture is shown to recover the full batch solution, even while updated state estimates are produced in real-time.", "title": "" }, { "docid": "d8af86876a53cdafc8973b9e78838ca7", "text": "Preferred walking speed (PWS) reflects the integrated performance of the relevant physiological sub-systems, including energy expenditure. It remains unclear whether the PWS during over-ground walking is chosen to optimize one's balance control because studies on the effects of speed on the body's balance control have been limited. The current study aimed to bridge the gap by quantifying the effects of the walking speed on the body's center of mass (COM) motion relative to the center of pressure (COP) in terms of the changes and directness of the COM-COP inclination angle (IA) and its rate of change (RCIA). Data of the COM and COP were measured from fifteen young healthy males at three walking speeds including PWS using a motion capture system. The values of IAs and RCIAs at key gait events and their average values over gait phases were compared between speeds using one-way repeated measures ANOVA. With increasing walking speed, most of the IA and RCIA related variables were significantly increased (p<0.05) but not for those of the frontal IA. Significant quadratic trends (p<0.05) with highest directness at PWS were found in IA during single-limb support, and in RCIA during single-limb and double-limb support. The results suggest that walking at PWS corresponded to the COM-COP control maximizing the directness of the RCIAs over the gait cycle, a compromise between the effects of walking speed and the speed of weight transfer. The data of IA and RCIA at PWS may be used in future assessment of balance control ability in people with different levels of balance impairments.", "title": "" }, { "docid": "ba4fb2947987c87a5103616d4bc138de", "text": "In intelligent tutoring systems with natural language dialogue, speech act classification, the task of detecting learners’ intentions, informs the system’s response mechanism. 
In this paper, we propose supervised machine learning models for speech act classification in the context of an online collaborative learning game environment. We explore the role of context (i.e. speech acts of previous utterances) for speech act classification. We compare speech act classification models trained and tested with contextual and non-contextual features (contents of the current utterance). The accuracy of the proposed models is high. A surprising finding is the modest role of context in automatically predicting the speech acts.", "title": "" }, { "docid": "116294113ff20558d3bcb297950f6d63", "text": "This paper aims to analyze the influence of a Halbach array by using a semi analytical design optimization approach on a novel electrical machine design with slotless air gap winding. The useable magnetic flux density caused by the Halbach array magnetization is studied and compared to conventional radial magnetization systems. First, several discrete magnetic flux densities are analyzed for an infinitesimal wire size in an air gap range from 0.1 mm to 5 mm by the finite element method in Ansys Maxwell. Fourier analysis is used to approximate continuous functions for each magnetic flux density characteristic for each air gap height. Then, using a six-step commutation control, the magnetic flux acting on a certain phase geometry is considered for a parametric machine model. The design optimization approach utilizes the design freedom of the magnetic flux density shape in air gap as well as the heights and depths of all magnetic circuit components, which are stator and rotor cores, permanent magnets, air gap, and air gap winding. Use of a nonlinear optimization formulation, allows for fast and precise analytical calculation of objective function. In this way the influence of both magnetizations on Pareto optimal machine design sets, when mass and efficiency are weighted, are compared. Other design requirements, such as torque, current, air gap and wire height, are considered via constraints on this optimization. Finally, an optimal motor design study for the Halbach array magnetization pattern is compared to the conventional radial magnetization. As a reference design, an existing 15-inch rim wheel-hub motor with air gap winding is used.", "title": "" }, { "docid": "8a8dd829c9b7ce0c46ef1fd0736cc006", "text": "In this paper, we introduce a generic inference hybrid framework for Convolutional Recurrent Neural Network (conv-RNN) of semantic modeling of text, seamless integrating the merits on extracting different aspects of linguistic information from both convolutional and recurrent neural network structures and thus strengthening the semantic understanding power of the new framework. Besides, based on conv-RNN, we also propose a novel sentence classification model and an attention based answer selection model with strengthening power for the sentence matching and classification respectively. We validate the proposed models on a very wide variety of data sets, including two challenging tasks of answer selection (AS) and five benchmark datasets for sentence classification (SC). To the best of our knowledge, it is by far the most complete comparison results in both AS and SC. 
We empirically show superior performances of conv-RNN in these different challenging tasks and benchmark datasets and also summarize insights on the performances of other state-of-the-arts methodologies.", "title": "" }, { "docid": "98f811a1b5445763505009684ef1d160", "text": "This study examined the relationship between three of the ‘‘Big Five’’ traits (neuroticism, extraversion, and openness), self-esteem, loneliness and narcissism, and Facebook use. Participants were 393 first year undergraduate psychology students from a medium-sized Australian university who completed an online questionnaire. Negative binomial regression models showed that students with higher openness levels reported spending more time on Facebook and having more friends on Facebook. Interestingly, students with higher levels of loneliness reported having more Facebook friends. Extraversion, neuroticism, selfesteem and narcissism did not have significant associations with Facebook use. It was concluded that students who are high in openness use Facebook to connect with others in order to discuss a wide range of interests, whereas students who are high in loneliness use the site to compensate for their lack of offline", "title": "" }, { "docid": "40e129b6264892f1090fd9a8d6a9c1ae", "text": "We introduce an algorithm for text detection and localization (\"spotting\") that is computationally efficient and produces state-of-the-art results. Our system uses multi-channel MSERs to detect a large number of promising regions, then subsamples these regions using a clustering approach. Representatives of region clusters are binarized and then passed on to a deep network. A final line grouping stage forms word-level segments. On the ICDAR 2011 and 2015 benchmarks, our algorithm obtains an F-score of 82% and 83%, respectively, at a computational cost of 1.2 seconds per frame. We also introduce a version that is three times as fast, with only a slight reduction in performance.", "title": "" } ]
scidocsrr
33de766f28a69a864ecd5ce970baf882
Enabling Low-Latency Applications in Fog-Radio Access Networks
[ { "docid": "335a330d7c02f13c0f50823461f4e86f", "text": "Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources-the transmit precoding matrices of the MUs-and the computational resources-the CPU cycles/second assigned by the cloud to each MU-in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.", "title": "" } ]
[ { "docid": "b544aec3db71397c3b81851e8d770fda", "text": "A novel substrate integrated waveguide (SIW) slot antenna having folded corrugated stubs is proposed for suppressing the backlobes of the SIW slot antenna associated with the diffraction of the spillover current. The longitudinal array of the folded stubs replacing the SIW via-holes effectively prevents the propagation of the surface spillover current. The measured front-to-back ratio (FTBR) has been greatly (15 dB) improved from that of the common SIW slot antenna. We expect that the proposed folded corrugated SIW (FCSIW) slot antenna plays an important role for reducing the excessive backside radiation of the SIW slot antenna and for decreasing mutual coupling in SIW slot antenna arrays.", "title": "" }, { "docid": "dba1d0b9a2c409bd6ff9c39cbdb1e7ed", "text": "Recent research suggests that social interactions in video games may lead to the development of community bonding and prosocial attitudes. Building on this line of research, a national survey of U.S. adults finds that gamers who develop ties with a community of fellow gamers possess gaming social capital, a new gaming-related community construct that is shown to be a positive antecedent in predicting both face-to-face social capital and civic participation.", "title": "" }, { "docid": "a04302721f62c1af3b9be630524f03ab", "text": "Hyperspectral image processing has been a very dynamic area in remote sensing and other applications in recent years. Hyperspectral images provide ample spectral information to identify and distinguish spectrally similar materials for more accurate and detailed information extraction. Wide range of advanced classification techniques are available based on spectral information and spatial information. To improve classification accuracy it is essential to identify and reduce uncertainties in image processing chain. This paper presents the current practices, problems and prospects of hyperspectral image classification. In addition, some important issues affecting classification performance are discussed.", "title": "" }, { "docid": "9ed3b0144df3dfa88b9bfa61ee31f40a", "text": "OBJECTIVE\nTo determine the frequency of early relapse after achieving good initial correction in children who were on clubfoot abduction brace.\n\n\nMETHODS\nThe cross-sectional study was conducted at the Jinnah Postgraduate Medical Centre, Karachi, and included parents of children of either gender in the age range of 6 months to 3years with idiopathic clubfoot deformities who had undergone Ponseti treatment between September 2012 and June 2013, and who were on maintenance brace when the data was collected from December 2013 to March 2014. Parents of patients with follow-up duration in brace less than six months and those with syndromic clubfoot deformity were excluded. The interviews were taken through a purposive designed questionnaire. SPSS 16 was used for data analysis.\n\n\nRESULTS\nThe study included parents of 120 patients. Of them, 95(79.2%) behaved with good compliance on Denis Browne Splint, 10(8.3%) were fair and 15(12.5%)showed poor compliance. Major reason for poor and non-compliance was unaffordability of time and cost for regular follow-up. Besides, 20(16.67%) had inconsistent use due to delay inre-procurement of Foot Abduction Braceonce the child had outgrown the shoe. Only 4(3.33%) talked of cultural barriers and conflict of interest between the parents. 
Early relapse was observed in 23(19.16%) patients and 6(5%) of them responded to additional treatment and were put back on brace treatment; 13(10.83%) had minor relapse with forefoot varus, without functional disability, and the remaining 4(3.33%) had major relapse requiring extensive surgery. Overall success was recorded in 116(96.67%) cases.\n\n\nCONCLUSIONS\nThe positioning of shoes on abduction brace bar, comfort in shoes, affordability, initial and subsequent delay in procurement of new shoes once the child's feet overgrew the shoe, were the four containable factors on the part of Ponseti practitioner.", "title": "" }, { "docid": "ff02ddb759f94367813324ce15f09f8d", "text": "The present work describes a website designed for remote teaching of optical measurements using lasers. It enables senior undergraduate and postgraduate students to learn theoretical aspects of the subject and also have a means to perform experiments for better understanding of the application at hand. At this stage of web development, optical methods considered are those based on refractive index changes in the material medium. The website is specially designed in order to provide remote access of expensive lasers, cameras, and other laboratory instruments by employing a commercially available web browser. The web suite integrates remote experiments, hands-on experiments and life-like optical images generated by using numerical simulation techniques based on Open Foam software package. The remote experiments are real time experiments running in the physical laboratory but can be accessed remotely from anywhere in the world and at any time. Numerical simulation of problems enhances learning, visualization of problems and interpretation of results. In the present work hand-on experimental results are discussed with respect to simulated results. A reasonable amount of resource material, specifically theoretical background of interferometry is available on the website along with computer programs image processing and analysis of results obtained in an experiment.", "title": "" }, { "docid": "123b93071e0ae555734c0ab27d29b6bf", "text": "Computer-Assisted Pronunciation Training System (CAPT) has become an important learning aid in second language (L2) learning. Our approach to CAPT is based on the use of phonological rules to capture language transfer effects that may cause mispronunciations. This paper presents an approach for automatic derivation of phonological rules from L2 speech. The rules are used to generate an extended recognition network (ERN) that captures the canonical pronunciations of words, as well as the possible mispronunciations. The ERN is used with automatic speech recognition for mispronunciation detection. Experimentation with an L2 speech corpus that contains recordings from 100 speakers aims to compare the automatically derived rules with manually authored rules. Comparable performance is achieved in mispronunciation detection (i.e. telling which phone is wrong). The automatically derived rules also offer improved performance in diagnostic accuracy (i.e. identify how the phone is wrong).", "title": "" }, { "docid": "d245fbc12d9a7d36751e3b75d9eb0e62", "text": "What makes for an explanation of \"black box\" AI systems such as Deep Nets? We reviewed the pertinent literatures on explanation and derived key ideas. 
This set the stage for our empirical inquiries, which include conceptual cognitive modeling, the analysis of a corpus of cases of \"naturalistic explanation\" of computational systems, computational cognitive modeling, and the development of measures for performance evaluation. The purpose of our work is to contribute to the program of research on “Explainable AI.” In this report we focus on our initial synthetic modeling activities and the development of measures for the evaluation of explainability in human-machine work systems. INTRODUCTION The importance of explanation in AI has been emphasized in the popular press, with considerable discussion of the explainability of Deep Nets and Machine Learning systems (e.g., Kuang, 2017). For such “black box” systems, there is a need to explain how they work so that users and decision makers can develop appropriate trust and reliance. As an example, referencing Figure 1, a Deep Net that we created was trained to recognize types of tools. Figure 1. Some examples of Deep Net classification. Outlining the axe and overlaying bird silhouettes on it resulted in a confident misclassification. While a fuzzy hammer is correctly classified, an embossed rendering is classified as a saw. Deep Nets can classify with high hit rates for images that fall within the variation of their training sets, but are nonetheless easily spoofed using instances that humans find easy to classify. Furthermore, Deep Nets have to provide some classification for an input. Thus, a Volkswagen might be classified as a tulip by a Deep Net trained to recognize types of flowers. So, if Deep Nets do not actually possess human-semantic concepts (e.g., that axes have things that humans call \"blades\"), what do the Deep Nets actually \"see\"? And more directly, how can users be enabled to develop appropriate trust and reliance on these AI systems? Articles in the popular press highlight the successes of Deep Nets (e.g., the discovery of planetary systems in Hubble Telescope data; Temming 2018), and promise diverse applications \"... the recognition of faces, handwriting, speech... navigation and control of autonomous vehicles... it seems that neural networks are being used everywhere\" (Lucky, 2018, p. 24). And yet \"models are more complex and less interpretable than ever... Justifying [their] decisions will only become more crucial\" (Biran and Cotton, 2017, p. 4). Indeed, a proposed regulation before the European Union (Goodman and Flaxman, 2016) asserts that users have the \"right to an explanation.” What form must an explanation for Deep Nets take? This is a challenge in the DARPA \"Explainable AI\" (XAI) Program: To develop AI systems that can engage users in a process in which the mechanisms and \"decisions\" of the AI are explained. Our tasks on the Program are to: (1). Integrate philosophical studies and psychological research in order to identify consensus points, key concepts and key variables of explanatory reasoning, (2). Develop and validate measures of explanation goodness, explanation satisfaction, mental models and human-XAI performance, (3) Develop and evaluate a computational model of how people understand computational devices.
", "title": "" }, { "docid": "76dd7060fdbf9927495985dd5313896f", "text": "Many network solutions and overlay networks utilize probabilistic techniques to reduce information processing and networking costs. This survey article presents a number of frequently used and useful probabilistic techniques. Bloom filters and their variants are of prime importance, and they are heavily used in various distributed systems. This has been reflected in recent research and many new algorithms have been proposed for distributed systems that are either directly or indirectly based on Bloom filters. In this survey, we give an overview of the basic and advanced techniques, reviewing over 20 variants and discussing their application in distributed systems, in particular for caching, peer-to-peer systems, routing and forwarding, and measurement data summarization.", "title": "" }, { "docid": "fe05cc4e31effca11e2718ce05635a97", "text": "In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker’s knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.", "title": "" }, { "docid": "b6d856bf3b61883e3755cf00810b98c7", "text": "The development of cell printing is vital for establishing biofabrication approaches as clinically relevant tools. Achieving this requires bio-inks which must not only be easily printable, but also allow controllable and reproducible printing of cells. This review outlines the general principles and current progress and compares the advantages and challenges for the most widely used biofabrication techniques for printing cells: extrusion, laser, microvalve, inkjet and tissue fragment printing. It is expected that significant advances in cell printing will result from synergistic combinations of these techniques and lead to optimised resolution, throughput and the overall complexity of printed constructs.", "title": "" }, { "docid": "8bdbf6fc33bc0b2cb5911683c13912a0", "text": "The breaking of solid objects, like glass or pottery, poses a complex problem for computer animation. We present our methods of using physical simulation to drive the animation of breaking objects. Breakage is obtained in a three-dimensional flexible model as the limit of elastic behavior. This article describes three principal features of the model: a breakage model, a collision-detection/response scheme, and a geometric modeling method.
We use networks of point masses connected by springs to represent physical objects that can bend and break. We present efficient collision-detection algorithms, appropriate for simulating the collisions between the various pieces that interact in breakage. The capability of modeling real objects is provided by a technique of building up composite structures from simple lattice models. We applied these methods to animate the breaking of a teapot and other dishware activities in the animation Tipsy Turvy shown at Siggraph '89. Animation techniques that rely on physical simulation to control the motion of objects are discussed, and further topics for research are presented.", "title": "" }, { "docid": "8a24f9d284507765e0026ae8a70fc482", "text": "The diagnosis of pulmonary tuberculosis in patients with Human Immunodeficiency Virus (HIV) is complicated by the increased presence of sputum smear negative tuberculosis. Diagnosis of smear negative pulmonary tuberculosis is made by an algorithm recommended by the National Tuberculosis and Leprosy Programme that uses symptoms, signs and laboratory results. The objective of this study is to determine the sensitivity and specificity of the tuberculosis treatment algorithm used for the diagnosis of sputum smear negative pulmonary tuberculosis. A cross-sectional study with prospective enrollment of patients was conducted in Dar-es-Salaam, Tanzania. For patients with sputum smear negative, sputum was sent for culture. All consenting recruited patients were counseled and tested for HIV. Patients were evaluated using the National Tuberculosis and Leprosy Programme guidelines and those fulfilling the criteria of having active pulmonary tuberculosis were started on anti tuberculosis therapy. Remaining patients were provided appropriate therapy. A chest X-ray, Mantoux test, and Full Blood Picture were done for each patient. The sensitivity and specificity of the recommended algorithm was calculated. Predictors of sputum culture positive were determined using multivariate analysis. During the study, 467 subjects were enrolled. Of those, 318 (68.1%) were HIV positive, 127 (27.2%) had sputum culture positive for Mycobacteria Tuberculosis, of whom 66 (51.9%) were correctly treated with anti-Tuberculosis drugs and 61 (48.1%) were missed and did not get anti-Tuberculosis drugs. Of the 286 subjects with sputum culture negative, 107 (37.4%) were incorrectly treated with anti-Tuberculosis drugs. The diagnostic algorithm for smear negative pulmonary tuberculosis had a sensitivity and specificity of 38.1% and 74.5% respectively. The presence of a dry cough, a high respiratory rate, a low eosinophil count, a mixed type of anaemia and presence of a cavity were found to be predictive of smear negative but culture positive pulmonary tuberculosis. The current practices of establishing pulmonary tuberculosis diagnosis are not sensitive and specific enough to establish the diagnosis of Acid Fast Bacilli smear negative pulmonary tuberculosis and overtreat people with no pulmonary tuberculosis.", "title": "" }, { "docid": "c19844950a3531d152408fd05904772b", "text": "Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN).
The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character level language modeling data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve state of the art results to 1.19 and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general as any kind of RNN cell is a possible building block for the FS-RNN architecture, and thus can be flexibly applied to different tasks.", "title": "" }, { "docid": "48774da3dd848f6e7dc0b63fdf89694e", "text": "Near Field Communication (NFC) offers intuitive interactions between humans and vehicles. In this paper we explore different NFC based use cases in an automotive context. Nearly all described use cases have been implemented in a BMW vehicle to get experiences of NFC in a real in-car environment. We describe the underlying soft- and hardware architecture and our experiences in setting up the prototype.", "title": "" }, { "docid": "9201964dfef74396dabb6bd2a3effee3", "text": "A MATLAB program was developed to invert first arrival travel time picks from zero offset profiling borehole ground penetrating radar traces to obtain the electromagnetic wave propagation velocities in soil. Zero-offset profiling refers to a mode of operation wherein the centers of the bistatic antennae being lowered to the same depth below ground for each measurement. The inversion uses a simulated annealing optimization routine, whereby the model attempts to reduce the root mean square error between the measured and modeled travel time by perturbing the velocity in a ray tracing routine. Measurement uncertainty is incorporated through the presentation of the ensemble mean and standard deviation from the results of a Monte Carlo simulation. The program features a pre-processor to modify or delete travel time information from the profile before inversion and post-processing through presentation of the ensemble statistics of the water contents inferred from the velocity profile. The program includes a novel application of a graphical user interface to animate the velocity fitting routine.", "title": "" }, { "docid": "77e385b7e7305ec0553c980f22bfa3b4", "text": "Two and three-dimensional simulations of experiments on atmosphere mixing and stratification in a nuclear power plant containment were performed with the code CFX4.4, with the inclusion of simple models for steam condensation. The purpose was to assess the applicability of the approach to simulate the behaviour of light gases in containments at accident conditions. The comparisons of experimental and simulated results show that, despite a tendency to simulate more intensive mixing, the proposed approach may replicate the non-homogeneous structure of the atmosphere reasonably well.
Introduction One of the nuclear reactor safety issues that have lately been considered using Computational Fluid Dynamics (CFD) codes is the problem of predicting the eventual non-homogeneous concentration of light flammable gas (hydrogen) in the containment of a nuclear power plant (NPP) at accident conditions. During a hypothetical severe accident in a Pressurized Water Reactor NPP, hydrogen could be generated due to Zircaloy oxidation in the reactor core. Eventual high concentrations of hydrogen in some parts of the containment could cause hydrogen ignition and combustion, which could threaten the containment integrity. The purpose of theoretical investigations is to predict hydrogen behaviour at accident conditions prior to combustion. In the past few years, many investigations about the possible application of CFD codes for this purpose have been started [1-5]. CFD codes solve the transport mass, momentum and energy equations when a fluid system is modelled using local instantaneous description. Some codes, which also use local instantaneous description, have been developed specifically for nuclear applications [68]. Although many CFD codes are multi-purpose, some of them still lack some models, which are necessary for adequate simulations of containment phenomena. In particular, the modelling of steam condensation often has to be incorporated in the codes by the users. These theoretical investigations are complemented by adequate experiments. Recently, the following novel integral experimental facilities have been set up in Europe: TOSQAN [9,10], at the Institut de Radioprotection et de Sureté Nucléaire (IRSN) in Saclay (France), MISTRA [9,11], at the", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "c3b88e3ff4c4f8932892b5692e4b10eb", "text": "Medicine is an extremely challenging field of research, which has been more than any other discipline of fundamental importance for human existence. The variety and inherent complexity of unsolved problems has made it a major driving force for many natural and engineering sciences. 
Hence, from the early days of Computer Graphics and Computer Vision the medical field has been one of the most important application areas with an enduring provision of fascinating research challenges. Conversely, individual Graphics and Computer Vision tools and methods have become increasingly irreplaceable in modern medicine. In this article I will present my personal view of the interdisciplinary field of surgery simulation which encompasses many different disciplines including Medicine, Computer Graphics, Computer Vision, Mechanics, Material Sciences, Robotics and Numeric Analysis. I will discuss the individual tasks, challenges and problems arising during the design and implementation of advanced surgery simulation environments, where my emphasis is directed towards the roles of graphics and vision.", "title": "" }, { "docid": "69fd3e6e9a1fc407d20b0fb19fc536e3", "text": "In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. This lack of easily accessible, suitable, common testing resource forms the major impediment to comparing and extending the issues concerned with automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI facial expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation. It has been built as a Web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.", "title": "" }, { "docid": "f01a19652bff88923a3141fb56d805e2", "text": "This paper presents a visible light communication system, focusing mostly on the aspects related with the hardware design and implementation. The designed system is aimed to ensure a highly-reliable communication between a commercial LED-based traffic light and a receiver mounted on a vehicle. Enabling wireless data transfer between the road infrastructure and vehicles has the potential to significantly increase the safety and efficiency of the transportation system. The paper presents the advantages of the proposed system and explains some of the choices made in the implementation process.", "title": "" } ]
scidocsrr
f34ced239c02a52fbb771d042910a58d
Big data emerging technologies: A Case Study with analyzing twitter data using apache hive
[ { "docid": "298b65526920c7a094f009884439f3e4", "text": "Big Data concerns massive, heterogeneous, autonomous sources with distributed and decentralized control. These characteristics make it an extreme challenge for organizations using traditional data management mechanism to store and process these huge datasets. It is required to define a new paradigm and re-evaluate current system to manage and process Big Data. In this paper, the important characteristics, issues and challenges related to Big Data management has been explored. Various open source Big Data analytics frameworks that deal with Big Data analytics workloads have been discussed. Comparative study between the given frameworks and suitability of the same has been proposed.", "title": "" }, { "docid": "e9aac361f8ca1bb8f10409859aef718d", "text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop-an open-source implementation of MapReduce is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a dataintensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.", "title": "" } ]
[ { "docid": "07ef9eece7de49ee714d4a2adf9bb078", "text": "Vegetable oil has been proven to be advantageous as a non-toxic, cost-effective and biodegradable solvent to extract polycyclic aromatic hydrocarbons (PAHs) from contaminated soils for remediation purposes. The resulting vegetable oil contained PAHs and therefore required a method for subsequent removal of extracted PAHs and reuse of the oil in remediation processes. In this paper, activated carbon adsorption of PAHs from vegetable oil used in soil remediation was assessed to ascertain PAH contaminated oil regeneration. Vegetable oils, originating from lab scale remediation, with different PAH concentrations were examined to study the adsorption of PAHs on activated carbon. Batch adsorption tests were performed by shaking oil-activated carbon mixtures in flasks. Equilibrium data were fitted with the Langmuir and Freundlich isothermal models. Studies were also carried out using columns packed with activated carbon. In addition, the effects of initial PAH concentration and activated carbon dosage on sorption capacities were investigated. Results clearly revealed the effectiveness of using activated carbon as an adsorbent to remove PAHs from the vegetable oil. Adsorption equilibrium of PAHs on activated carbon from the vegetable oil was successfully evaluated by the Langmuir and Freundlich isotherms. The initial PAH concentrations and carbon dosage affected adsorption significantly. The results indicate that the reuse of vegetable oil was feasible.", "title": "" }, { "docid": "4b494016220eb5442642e34c3ed2d720", "text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.", "title": "" }, { "docid": "5d21df36697616719bcc3e0ee22a08bd", "text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. 
If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics", "title": "" }, { "docid": "580e2f24b8b4a7564e132b87420fe7ad", "text": "Walking is a vital exercise for health promotion and a fundamental ability necessary for everyday life. In the authors' previous studies, an omni-directional walker was developed for walking rehabilitation. Walking training programs are stored in the walker and the walker must precisely follow the paths defined in the walking training programs to guarantee the effectiveness of rehabilitation. In the previous study, an adaptive control method has been proposed for path tracking of the walker considering a center of gravity shift and load change. In this paper simulations and running experiments are carried out to verify the proposed adaptive control method. First, the kinematics and the kinetics of the omni-directional walker motion are described. Second, the adaptive control strategy is presented. Finally, path tracking simulations and experiments are carried out using the proposed method. Comparing with the proportional-integral-derivative control (PID control), the simulation and experiment results demonstrate the feasibility and effectiveness of the adaptive control method.", "title": "" }, { "docid": "fb9c0650f5ac820eef3df65b7de1ff12", "text": "Since 2013, a number of studies have enhanced the literature and have guided clinicians on viable treatment interventions outside of pharmacotherapy and surgery. Thirty-three randomized controlled trials and one large observational study on exercise and physiotherapy were published in this period. Four randomized controlled trials focused on dance interventions, eight on treatment of cognition and behavior, two on occupational therapy, and two on speech and language therapy (the latter two specifically addressed dysphagia). Three randomized controlled trials focused on multidisciplinary care models, one study on telemedicine, and four studies on alternative interventions, including music therapy and mindfulness. These studies attest to the marked interest in these therapeutic approaches and the increasing evidence base that places nonpharmacological treatments firmly within the integrated repertoire of treatment options in Parkinson's disease.", "title": "" }, { "docid": "6bd18974879c8f38309e8ebb818c6ebf", "text": "Calcium (Ca(2+)) is a ubiquitous signaling molecule that accumulates in the cytoplasm in response to diverse classes of stimuli and, in turn, regulates many aspects of cell function. In neurons, Ca(2+) influx in response to action potentials or synaptic stimulation triggers neurotransmitter release, modulates ion channels, induces synaptic plasticity, and activates transcription. In this article, we discuss the factors that regulate Ca(2+) signaling in mammalian neurons with a particular focus on Ca(2+) signaling within dendritic spines. This includes consideration of the routes of entry and exit of Ca(2+), the cellular mechanisms that establish the temporal and spatial profile of Ca(2+) signaling, and the biophysical criteria that determine which downstream signals are activated when Ca(2+) accumulates in a spine. 
Furthermore, we also briefly discuss the technical advances that made possible the quantitative study of Ca(2+) signaling in dendritic spines.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "title": "" }, { "docid": "55462ae5eeb747114dfda77d14519557", "text": "In an environment where supply chains compete against supply chains, information sharing among supply chain partners using information systems is a competitive tool. Supply chain ontology has been proposed as an important medium for attaining information systems interoperability. Ontology has its origin in philosophy, and the computing community has adopted ontology in its language. This paper presents a study of state of the art research in supply chain ontology and identifies the outstanding research gaps. Six supply chain ontology models were identified from a systematic review of literature. A seven point comparison framework was developed to consider the underlying concepts as well as application of the ontology models. The comparison results were then synthesised into nine gaps to inform future supply chain ontology research. This work is a rigorous and systematic attempt to identify and synthesise the research in supply chain ontology. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6be44677f42b5a6aaaea352e11024cfa", "text": "In this paper, we intend to discuss if and in what sense semiosis (meaning process, cf. C.S. Peirce) can be regarded as an “emergent” process in semiotic systems. It is not our problem here to answer when or how semiosis emerged in nature. As a prerequisite for the very formulation of these problems, we are rather interested in discussing the conditions which should be fulfilled for semiosis to be characterized as an emergent process. The first step in this work is to summarize a systematic analysis of the variety of emergence theories and concepts, elaborated by Achim Stephan. Along the summary of this analysis, we pose fundamental questions that have to be answered in order to ascribe a precise meaning to the term “emergence” in the context of an understanding of semiosis. 
After discussing a model for explaining emergence based on Salthe’s hierarchical structuralism, which considers three levels at a time in a semiotic system, we present some tentative answers to those questions.", "title": "" }, { "docid": "36f2be7a14eeb10ad975aa00cfd30f36", "text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rn^(K−1)) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r^K + nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(r^⌊K/2⌋ n^⌈K/2⌉) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.", "title": "" }, { "docid": "4805f0548cb458b7fad623c07ab7176d", "text": "This paper presents a unified control framework for controlling a quadrotor tail-sitter UAV. The most salient feature of this framework is its capability of uniformly treating the hovering and forward flight, and enabling continuous transition between these two modes, depending on the commanded velocity. The key part of this framework is a nonlinear solver that solves for the proper attitude and thrust that produces the required acceleration set by the position controller in an online fashion. The planned attitude and thrust are then achieved by an inner attitude controller that is global asymptotically stable. To characterize the aircraft aerodynamics, a full envelope wind tunnel test is performed on the full-scale quadrotor tail-sitter UAV. In addition to planning the attitude and thrust required by the position controller, this framework can also be used to analyze the UAV's equilibrium state (trimmed condition), especially when wind gust is present. Finally, simulation results are presented to verify the controller's capacity, and experiments are conducted to show the attitude controller's performance.", "title": "" }, { "docid": "980950d8c5c7f5cda550b271d4e0d309", "text": "The paper presents an accurate analytical subdomain model for computation of the open-circuit magnetic field in surface-mounted permanent-magnet machines with any pole and slot combinations, including fractional slot machines, accounting for stator slotting effect. It is derived by solving the field governing equations in each simple and regular subdomain, i.e., magnet, air-gap and stator slots, and applying the boundary conditions to the interfaces between these subdomains.
The model accurately accounts for the influence of interaction between slots, radial/parallel magnetization, internal/external rotor topologies, relative recoil permeability of magnets, and odd/even periodic boundary conditions. The back-electromotive force, electromagnetic torque, cogging torque, and unbalanced magnetic force are obtained based on the field model. The relationship between this accurate subdomain model and the conventional subdomain model, which is based on the simplified one slot per pole machine model, is also discussed. The investigation shows that the proposed accurate subdomain model has better accuracy than the subdomain model based on one slot/pole machine model. The finite element and experimental results validate the analytical prediction.", "title": "" }, { "docid": "90fe763855ca6c4fabe4f9d042d5c61a", "text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with a handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.", "title": "" }, { "docid": "91f3268092606d2bd1698096e32c824f", "text": "Classic pipeline models for task-oriented dialogue system require explicit modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map dialogue history to the response in current turn without explicit knowledge base querying. In this work, we propose a novel framework that leverages the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and uses this representation to query a knowledge base via an attention mechanism. Experiment on Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset shows that our framework significantly outperforms other sequence-to-sequence based baseline models on both automatic and human evaluation.
Title and Abstract in Chinese 面向任务型对话中基于对话状态表示的序列到序列学习 面向任务型对话中,传统流水线模型要求对对话状态进行显式建模。这需要人工定义对 领域相关的知识库进行检索的动作空间。相反地,序列到序列模型可以直接学习从对话 历史到当前轮回复的一个映射,但其没有显式地进行知识库的检索。在本文中,我们提 出了一个结合传统流水线与序列到序列二者优点的模型。我们的模型将对话历史建模为 一组固定大小的分布式表示。基于这组表示,我们利用注意力机制对知识库进行检索。 在斯坦福多轮多领域对话数据集上的实验证明,我们的模型在自动评价与人工评价上优 于其他基于序列到序列的模型。", "title": "" }, { "docid": "3ce6c3b6a23e713bf9af419ce2d7ded3", "text": "Two measures of financial performance that are being applied increasingly in investor-owned and not-for-profit healthcare organizations are market value added (MVA) and economic value added (EVA). Unlike traditional profitability measures, both MVA and EVA measures take into account the cost of equity capital. MVA is most appropriate for investor-owned healthcare organizations and EVA is the best measure for not-for-profit organizations. As healthcare financial managers become more familiar with MVA and EVA and understand their potential, these two measures may become more widely accepted accounting tools for assessing the financial performance of investor-owned and not-for-profit healthcare organizations.", "title": "" }, { "docid": "17d0da8dd05d5cfb79a5f4de4449fcdd", "text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in", "title": "" }, { "docid": "e7d36dc01a3e20c3fb6d2b5245e46705", "text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. 
The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.", "title": "" }, { "docid": "b5fe13becf36cdc699a083b732dc5d6a", "text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.", "title": "" }, { "docid": "f2af56bef7ae8c12910d125a3b729e6a", "text": "We investigate an important and challenging problem in summary generation, i.e., Evolutionary Trans-Temporal Summarization (ETTS), which generates news timelines from massive data on the Internet. ETTS greatly facilitates fast news browsing and knowledge comprehension, and hence is a necessity. Given the collection of time-stamped web documents related to the evolving news, ETTS aims to return news evolution along the timeline, consisting of individual but correlated summaries on each date. Existing summarization algorithms fail to utilize trans-temporal characteristics among these component summaries. We propose to model trans-temporal correlations among component summaries for timelines, using inter-date and intra-date sentence dependencies, and present a novel combination. We develop experimental systems to compare 5 rival algorithms on 6 instinctively different datasets which amount to 10251 documents. Evaluation results in ROUGE metrics indicate the effectiveness of the proposed approach based on trans-temporal information.", "title": "" }, { "docid": "cf15139b8f62d01f38f14d8fa09d3bd6", "text": "In reinforcement learning (RL) tasks, an efficient exploration mechanism should be able to encourage an agent to take actions that lead to less frequent states which may yield higher accumulative future return. However, both knowing about the future and evaluating the frequentness of states are non-trivial tasks, especially for deep RL domains, where a state is represented by high-dimensional image frames. 
In this paper, we propose a novel informed exploration framework for deep RL tasks, where we build the capability for a RL agent to predict over the future transitions and evaluate the frequentness for the predicted future frames in a meaningful manner. To this end, we train a deep prediction model to generate future frames given a state-action pair, and a convolutional autoencoder model to generate deep features for conducting hashing over the seen frames. In addition, to utilize the counts derived from the seen frames to evaluate the frequentness for the predicted frames, we tackle the challenge of making the hash codes for the predicted future frames to match with their corresponding seen frames. In this way, we could derive a reliable metric for evaluating the novelty of the future direction pointed by each action, and hence inform the agent to explore the least frequent one. We use Atari 2600 games as the testing environment and demonstrate that the proposed framework achieves significant performance gain over a state-of-the-art informed exploration approach in most of the domains.", "title": "" } ]
scidocsrr
a23a288df5f4228eedb94d26d84583bf
Quasi-Homography Warps in Image Stitching
[ { "docid": "b29947243b1ad21b0529a6dd8ef3c529", "text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.", "title": "" }, { "docid": "916fd932ae299b30f322aed6b5f35a9c", "text": "This paper proposes a novel parametric warp which is a spatial combination of a projective transformation and a similarity transformation. Given the projective transformation relating two input images, based on an analysis of the projective transformation, our method smoothly extrapolates the projective transformation of the overlapping regions into the non-overlapping regions and the resultant warp gradually changes from projective to similarity across the image. The proposed warp has the strengths of both projective and similarity warps. It provides good alignment accuracy as projective warps while preserving the perspective of individual image as similarity warps. It can also be combined with more advanced local-warp-based alignment methods such as the as-projective-as-possible warp for better alignment accuracy. With the proposed warp, the field of view can be extended by stitching images with less projective distortion (stretched shapes and enlarged sizes).", "title": "" } ]
[ { "docid": "3531efcf8308541b0187b2ea4ab91721", "text": "This paper proposed a novel controlling technique of pulse width modulation (PWM) mode and pulse frequency modulation (PFM) mode to keep the high efficiency within width range of loading. The novel control method is using PWM and PFM detector to achieve two modes switching appropriately. The controlling technique can make the efficiency of current mode DC-DC buck converter up to 88% at light loading and this paper is implemented by TSMC 0.35 mum CMOS process.", "title": "" }, { "docid": "7990aa405f43f6e176bd25f150a58307", "text": "The human skin is a promising surface for input to computing devices but differs fundamentally from existing touch-sensitive devices. The authors propose the use of skin landmarks, which offer unique tactile and visual cues, to enhance body-based user interfaces.", "title": "" }, { "docid": "3ea6de664a7ac43a1602b03b46790f0a", "text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.", "title": "" }, { "docid": "0b4a107c825a095573ecded075b77b51", "text": "Primary Argument Nursing has a rich heritage of advocating for a healthy society established on a foundation of social justice. The future legitimacy and success of public health nursing depends on recognising and appropriately addressing the social, economic and political determinants of health in the populations served. There is an incontrovertible association between population health status, absolute income levels and income inequality. 
Thus, along with other social determinants of health, income differentials within populations must be a fundamental consideration when planning and delivering nursing services. Ensuring that federal and state health policy explicitly addresses this key issue remains an important challenge for the nursing profession, the public health system and the Australian community.", "title": "" }, { "docid": "5466fef2418d06ac195f4165103d0472", "text": "Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record", "title": "" }, { "docid": "6788bfdd287778ac8c600ee94a0b2a9c", "text": "The predominant approach to Visual Question Answering (VQA) demands that the model represents within its weights all of the information required to answer any question about any image. Learning this information from any real training set seems unlikely, and representing it in a reasonable number of weights doubly so. We propose instead to approach VQA as a meta learning task, thus separating the question answering method from the information required. At test time, the method is provided with a support set of example questions/answers, over which it reasons to resolve the given question. The support set is not fixed and can be extended without retraining, thereby expanding the capabilities of the model. To exploit this dynamically provided information, we adapt a state-of-the-art VQA model with two techniques from the recent meta learning literature, namely prototypical networks and meta networks. Experiments demonstrate the capability of the system to learn to produce completely novel answers (i.e. never seen during training) from examples provided at test time. 
In comparison to the existing state of the art, the proposed method produces qualitatively distinct results with higher recall of rare answers, and a better sample efficiency that allows training with little initial data. More importantly, it represents an important step towards vision-and-language methods that can learn and reason on-the-fly.", "title": "" }, { "docid": "d91077f97e745cdd73315affb5cbbdd2", "text": "We consider the problem of learning the underlying graph of an unknown Ising model on p spins from a collection of i.i.d. samples generated from the model. We suggest a new estimator that is computationally efficient and requires a number of samples that is near-optimal with respect to previously established informationtheoretic lower-bound. Our statistical estimator has a physical interpretation in terms of “interaction screening”. The estimator is consistent and is efficiently implemented using convex optimization. We prove that with appropriate regularization, the estimator recovers the underlying graph using a number of samples that is logarithmic in the system size p and exponential in the maximum coupling-intensity and maximum node-degree.", "title": "" }, { "docid": "1b52822b76e7ace1f7e12a6f2c92b060", "text": "We treated the mandibular retrusion of a 20-year-old man by distraction osteogenesis. Our aim was to avoid any visible discontinuities in the soft tissue profile that may result from conventional \"one-step\" genioplasty. The result was excellent. In addition to a good aesthetic outcome, there was increased bone formation not only between the two surfaces of the osteotomy but also adjacent to the distraction zone, resulting in improved coverage of the roots of the lower incisors. Only a few patients have been treated so far, but the method seems to hold promise for the treatment of extreme retrognathism, as these patients often have insufficient buccal bone coverage.", "title": "" }, { "docid": "1c04afe05954a425209aaf0267236255", "text": "Twitter is an online social networking service where worldwide users publish their opinions on a variety of topics, discuss current issues, complain, and express positive or negative sentiment for products they use in daily life. Therefore, Twitter is a rich source of data for opinion mining and sentiment analysis. However, sentiment analysis for Twitter messages (tweets) is regarded as a challenging problem because tweets are short and informal. This paper focuses on this problem by the analyzing of symbols called emotion tokens, including emotion symbols (e.g. emoticons and emoji ideograms). According to observation, these emotion tokens are commonly used. They directly express one’s emotions regardless of his/her language, hence they have become a useful signal for sentiment analysis on multilingual tweets. 
The paper describes the approach to performing sentiment analysis, that is able to determine positive, negative and neutral sentiments for a tested topic.", "title": "" }, { "docid": "c3b6d46a9e1490c720056682328586d5", "text": "BACKGROUND\nBirth preparedness and complication preparedness (BPACR) is a key component of globally accepted safe motherhood programs, which helps ensure women to reach professional delivery care when labor begins and to reduce delays that occur when mothers in labor experience obstetric complications.\n\n\nOBJECTIVE\nThis study was conducted to assess practice and factors associated with BPACR among pregnant women in Aleta Wondo district in Sidama Zone, South Ethiopia.\n\n\nMETHODS\nA community based cross sectional study was conducted in 2007, on a sample of 812 pregnant women. Data were collected using pre-tested and structured questionnaire. The collected data were analyzed by SPSS for windows version 12.0.1. The women were asked whether they followed the desired five steps while pregnant: identified a trained birth attendant, identified a health facility, arranged for transport, identified blood donor and saved money for emergency. Taking at least two steps was considered being well-prepared.\n\n\nRESULTS\nAmong 743 pregnant women only a quarter (20.5%) of pregnant women identified skilled provider. Only 8.1% identified health facility for delivery and/or for obstetric emergencies. Preparedness for transportation was found to be very low (7.7%). Considerable (34.5%) number of families saved money for incurred costs of delivery and emergency if needed. Only few (2.3%) identified potential blood donor in case of emergency. Majority (87.9%) of the respondents reported that they intended to deliver at home, and only 60(8%) planned to deliver at health facilities. Overall only 17% of pregnant women were well prepared. The adjusted multivariate model showed that significant predictors for being well-prepared were maternal availing of antenatal services (OR = 1.91 95% CI; 1.21-3.01) and being pregnant for the first time (OR = 6.82, 95% CI; 1.27-36.55).\n\n\nCONCLUSION\nBPACR practice in the study area was found to be low. Effort to increase BPACR should focus on availing antenatal care services.", "title": "" }, { "docid": "32b04b91bc796a082fb9c0d4c47efbf9", "text": "Intell Sys Acc Fin Mgmt. 2017;24:49–55. Summary A two‐step system is presented to improve prediction of telemarketing outcomes and to help the marketing management team effectively manage customer relationships in the banking industry. In the first step, several neural networks are trained with different categories of information to make initial predictions. In the second step, all initial predictions are combined by a single neural network to make a final prediction. Particle swarm optimization is employed to optimize the initial weights of each neural network in the ensemble system. Empirical results indicate that the two‐ step system presented performs better than all its individual components. In addition, the two‐ step system outperforms a baseline one where all categories of marketing information are used to train a single neural network. 
As a neural networks ensemble model, the proposed two‐step system is robust to noisy and nonlinear data, easy to interpret, suitable for large and heterogeneous marketing databases, fast and easy to implement.", "title": "" }, { "docid": "8ec018e0fc4ca7220387854bdd034a58", "text": "Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.", "title": "" }, { "docid": "83d788ffb340b89c482965b96d6803c2", "text": "A dead-time compensation method in voltage-source inverters (VSIs) is proposed. The method is based on a feedforward approach which produces compensating signals obtained from those of the I/sub d/-I/sub q/ current and primary angular frequency references in a rotating reference (d-q) frame. The method features excellent inverter output voltage distortion correction for both fundamental and harmonic components. The correction is not affected by the magnitude of the inverter output voltage or current distortions. Since this dead-time compensation method allows current loop calculations in the d-q frame at a slower sampling rate with a conventional microprocessor than calculations in a stationary reference frame, a fully digital, vector-controlled speed regulator with just a current component loop is realized for PWM (pulsewidth modulation) VSIs. Test results obtained for the compression method are described.<<ETX>>", "title": "" }, { "docid": "3ae6703f2ea27b1c3418ce623aa394a0", "text": "A Hardware Trojan is a malicious, undesired, intentional modification of an electronic circuit or design, resulting in the incorrect behaviour of an electronic device when in operation – a back-door that can be inserted into hardware. A Hardware Trojan may be able to defeat any and all security mechanisms (software or hardware-based) and subvert or augment the normal operation of an infected device. This may result in modifications to the functionality or specification of the hardware, the leaking of sensitive information, or a Denial of Service (DoS) attack. Understanding Hardware Trojans is vital when developing next generation defensive mechanisms for the development and deployment of electronics in the presence of the Hardware Trojan threat. Research over the past five years has primarily focussed on detecting the presence of Hardware Trojans in infected devices. 
This report reviews the state-of-the-art in Hardware Trojans, from the threats they pose through to modern prevention, detection and countermeasure techniques. APPROVED FOR PUBLIC RELEASE", "title": "" }, { "docid": "b5997c5c88f57b387e56dc68445b38e2", "text": "Identifying the relationship between two text objects is a core research problem underlying many natural language processing tasks. A wide range of deep learning schemes have been proposed for text matching, mainly focusing on sentence matching, question answering or query document matching. We point out that existing approaches do not perform well at matching long documents, which is critical, for example, to AI-based news article understanding and event or story formation. The reason is that these methods either omit or fail to fully utilize complicated semantic structures in long documents. In this paper, we propose a graph approach to text matching, especially targeting long document matching, such as identifying whether two news articles report the same event in the real world, possibly with different narratives. We propose the Concept Interaction Graph to yield a graph representation for a document, with vertices representing different concepts, each being one or a group of coherent keywords in the document, and with edges representing the interactions between different concepts, connected by sentences in the document. Based on the graph representation of document pairs, we further propose a Siamese Encoded Graph Convolutional Network that learns vertex representations through a Siamese neural network and aggregates the vertex features though Graph Convolutional Networks to generate the matching result. Extensive evaluation of the proposed approach based on two labeled news article datasets created at Tencent for its intelligent news products show that the proposed graph approach to long document matching significantly outperforms a wide range of state-of-the-art methods.", "title": "" }, { "docid": "ca4d2862ba75bfc35d8e9ada294192e1", "text": "This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis provides characteristics influencing network performance in public safety communication networks like heterogeneous area-based movement, obstacles, and joining/leaving of nodes. As these characteristics cannot be modelled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modelling, we compare it to existing ones (modelling the same scenario) using different pure movement and link based metrics. The new model shows specific characteristics like heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. The simulations show that the new model discloses new information and has a significant impact on performance analysis.", "title": "" }, { "docid": "6ddad64507fa5ebf3b2930c261584967", "text": "In this article we propose a methodology to determine snow cover by means of Landsat-7 ETM+ and Landsat-5 TM images, as well as an improvement in daily Snow Cover TERRA- MODIS product (MOD10A1), between 2002 and 2005. Both methodologies are based on a NDSI threshold > 0.4. In the Landsat case, and although this threshold also selects water bodies, we have obtained optimal results using a mask of water bodies and generating a pre-boundary snow mask around the snow cover. 
Moreover, an important improvement in snow cover mapping in shadow cast areas by means of a hybrid classification has been obtained. Using these results as ground truth we have verified MODIS Snow Cover product using coincident dates. In the MODIS product, we have noted important commission errors in water bodies, forest covers and orographic shades because of the NDVI-NDSI filter applied to this product. In order to improve MODIS snow cover determination using MODIS images, we propose a hybrid methodology based on experience with Landsat images, which provide greater spatial resolution.", "title": "" }, { "docid": "9841dd0b1c71f33f9fae95b6621b5ecc", "text": "In recent years the number of wind turbines installed in Europe and other continents has increase dramatically. Appropriate lightning protection is required in order to avoid costly replacements of lightning damaged turbine blades, components of the electronic control system, and/or temporary loss of energy production. Depending on local site conditions elevated objects with heights of 100 m and more can frequently initiate upward lightning. From the 100 m high and instrumented radio tower on Gaisberg in Austria more than 50 flashes per year are initiated and measured. Also lightning location systems or video studies in Japan [1], [2] or in the US [3] show frequent occurrence of lightning initiated from wind turbines, especially during cold season. Up to now no reliable method exists to estimate the expected frequency of upward lightning for a given structure and location. About half of the flashes observed at the GBT are of ICCOnly type. Unfortunately this type of discharge is not detected by lightning location systems as its current waveform does not show any fast rising and high peak current pulses as typical for first or subsequent return strokes in downward lightning (cloud-to-ground). Nevertheless some of this ICCOnly type discharges transferred the highest amount of charge, exceeding the 300 C specified in IEC 62305 for lightning protection level LPL I.", "title": "" }, { "docid": "a28a96adfef7854a864e45c4351e1bd5", "text": "In the real-time bidding (RTB) display advertising ecosystem, when receiving a bid request, the demandside platform (DSP) needs to predict the click-through rate (CTR) for ads and calculate the bid price according to the CTR estimated. In addition to challenges similar to those encountered in sponsored search advertising, such as data sparsity and cold start problems, more complicated feature interactions involving multi-aspects, such as the user, publisher and advertiser, make CTR estimation in RTB more difficult. We consider CTR estimation in RTB as a tensor complement problem and propose a fully coupled interactions tensor factorization (FCTF) model based on Tucker decomposition (TD) to model three pairwise interactions between the user, publisher and advertiser and ultimately complete the tensor complement task. FCTF is a special case of the Tucker decomposition model; however, it is linear in runtime for both learning and prediction. Different from pairwise interaction tensor factorization (PITF), which is another special case of TD, FCTF is independent from the Bayesian personalized ranking optimization algorithm and is applicable to generic third-order tensor decomposition with popular simple optimizations, such as the least square method or mean square error. 
In addition, we also incorporate all explicit information obtained from different aspects into the FCTF model to alleviate the impact of cold start and sparse data on the final performance. We compare the performance and runtime complexity of our method with Tucker decomposition, canonical decomposition and other popular methods for CTR prediction over real-world advertising datasets. Our experimental results demonstrate that the improved model not only achieves better prediction quality than the others, due to considering fully coupled interactions between the three entities (user, publisher and advertiser), but also can accomplish training and prediction with linear runtime. © 2016 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
6880869271712d6e15c78f941450f136
Noninvasive Brain-Computer Interface: Decoding Arm Movement Kinematics and Motor Control
[ { "docid": "6865c344849ec96d79e7a83a2ab559b1", "text": "A brain-computer interface (BCI) acquires brain signals, extracts informative features, and translates these features to commands to control an external device. This paper investigates the application of a noninvasive electroencephalography (EEG)-based BCI to identify brain signal features in regard to actual hand movement speed. This provides a more refined control for a BCI system in terms of movement parameters. An experiment was performed to collect EEG data from subjects while they performed right-hand movement at two different speeds, namely fast and slow, in four different directions. The informative features from the data were obtained using the Wavelet-Common Spatial Pattern (W-CSP) algorithm that provided high temporal-spatial-spectral resolution. The applicability of these features to classify the two speeds and to reconstruct the speed profile was studied. The results for classifying speed across seven subjects yielded a mean accuracy of 83.71% using a Fisher Linear Discriminant (FLD) classifier. The speed components were reconstructed using multiple linear regression, and a significant correlation of 0.52 (Pearson's linear correlation coefficient) was obtained between recorded and reconstructed velocities on average. The spatial patterns of the W-CSP features obtained showed activations in parietal and motor areas of the brain. The results achieved promise to provide a more refined control in BCI by including control of movement speed.", "title": "" } ]
[ { "docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24", "text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.", "title": "" }, { "docid": "5b16933905d36ba54ab74743251d7ca7", "text": "The explosive growth of the user-generated content on the Web has offered a rich data source for mining opinions. However, the large number of diverse review sources challenges the individual users and organizations on how to use the opinion information effectively. Therefore, automated opinion mining and summarization techniques have become increasingly important. Different from previous approaches that have mostly treated product feature and opinion extraction as two independent tasks, we merge them together in a unified process by using probabilistic models. Specifically, we treat the problem of product feature and opinion extraction as a sequence labeling task and adopt Conditional Random Fields models to accomplish it. As part of our work, we develop a computational approach to construct domain specific sentiment lexicon by combining semi-structured reviews with general sentiment lexicon, which helps to identify the sentiment orientations of opinions. Experimental results on two real world datasets show that the proposed method is effective.", "title": "" }, { "docid": "68c02f7658cb55a00f3a71923cf6dd2e", "text": "Anterior insula and adjacent frontal operculum (hereafter referred to as IFO) are active during exposure to tastants/odorants (particularly disgusting ones), and during the viewing of disgusted facial expressions. Together with lesion data, the IFO has thus been proposed to be crucial in processing disgust-related stimuli. Here, we examined IFO involvement in the processing of other people's gustatory emotions more generally by exposing participants to food-related disgusted, pleased and neutral facial expressions during functional magnetic resonance imaging (fMRI). We then exposed participants to pleasant, unpleasant and neutral tastants for the purpose of mapping their gustatory IFO. Finally, we associated participants' self reported empathy (measured using the Interpersonal Reactivity Index, IRI) with their IFO activation during the witnessing of others' gustatory emotions. 
We show that participants' empathy scores were predictive of their gustatory IFO activation while witnessing both the pleased and disgusted facial expression of others. While the IFO has been implicated in the processing of negative emotions of others and empathy for negative experiences like pain, our finding extends this concept to empathy for intense positive feelings, and provides empirical support for the view that the IFO contributes to empathy by mapping the bodily feelings of others onto the internal bodily states of the observer, in agreement with the putative interoceptive function of the IFO.", "title": "" }, { "docid": "0583b36c9dfa3080ab94b16a7410b7cd", "text": "In this paper we present a simple yet effective approach to automatic OCR error detection and correction on a corpus of French clinical reports of variable OCR quality within the domain of foetopathology. While traditional OCR error detection and correction systems rely heavily on external information such as domain-specific lexicons, OCR process information or manually corrected training material, these are not always available given the constraints placed on using medical corpora. We therefore propose a novel method that only needs a representative corpus of acceptable OCR quality in order to train models. Our method uses recurrent neural networks (RNNs) to model sequential information on character level for a given medical text corpus. By inserting noise during the training process we can simultaneously learn the underlying (character-level) language model and as well as learning to detect and eliminate random noise from the textual input. The resulting models are robust to the variability of OCR quality but do not require additional, external information such as lexicons. We compare two different ways of injecting noise into the training process and evaluate our models on a manually corrected data set. We find that the best performing system achieves a 73% accuracy.", "title": "" }, { "docid": "1050845816f29b50360eb6f2277071be", "text": "Natural language interactive narratives are a variant of traditional branching storylines where player actions are expressed in natural language rather than by selecting among choices. Previous efforts have handled the richness of natural language input using machine learning technologies for text classification, bootstrapping supervised machine learning approaches with human-in-the-loop data acquisition or by using expected player input as fake training data. This paper explores a third alternative, where unsupervised text classifiers are used to automatically route player input to the most appropriate storyline branch. We describe the Data-driven Interactive Narrative Engine (DINE), a web-based tool for authoring and deploying natural language interactive narratives. To compare the performance of different algorithms for unsupervised text classification, we collected thousands of user inputs from hundreds of crowdsourced participants playing 25 different scenarios, and hand-annotated them to create a goldstandard test set. Through comparative evaluations, we identified an unsupervised algorithm for narrative text classification that approaches the performance of supervised text classification algorithms. 
We discuss how this technology supports authors in the rapid creation and deployment of interactive narrative experiences, with authorial burdens similar to that of traditional branching storylines.", "title": "" }, { "docid": "2e4d1b5b1c1a8dbeba0d17025f2a2471", "text": "In this age of globalization, the need for competent legal translators is greater than ever. This perhaps explains the growing interest in legal translation not only by linguists but also by lawyers, the latter especially over the past 10 years (cf. Berteloot, 1999:101). Although Berteloot maintains that lawyers analyze the subject matter from a different perspective, she advises her colleagues also to take account of contributions by linguists (ibid.). I assume this includes translation theory as well. In the past, both linguists and lawyers have attempted to apply theories of general translation to legal texts, such as Catford’s concept of situation equivalence (Kielar, 1977:33), Nida’s theory of formal correspondence (Weisflog, 1987:187, 191); also in Weisflog 1996:35), and, more recently, Vermeer’s skopos theory (see Madsen’s, 1997:17-26). While some legal translators seem content to apply principles of general translation theory (Koutsivitis, 1988:37), others dispute the usefulness of translation theory for legal translation (Weston, 1991:1). The latter view is not surprising since special methods and techniques are required in legal translation, a fact confirmed by Bocquet, who recognizes the importance of establishing a theory or at least a theoretical framework that is practice oriented (1994). By analyzing legal translation as an act of communication in the mechanism of the law, my book New Approach to Legal Translation (1997) attempts to provide a theoretical basis for legal translation within the framework of modern translation theory.", "title": "" }, { "docid": "1fcbc7d6c408d00d3bd1e225e28a32cc", "text": "Active learning aims to train an accurate prediction model with minimum cost by labeling most informative instances. In this paper, we survey existing works on active learning from an instance-selection perspective and classify them into two categories with a progressive relationship: (1) active learning merely based on uncertainty of independent and identically distributed (IID) instances, and (2) active learning by further taking into account instance correlations. Using the above categorization, we summarize major approaches in the field, along with their technical strengths/weaknesses, followed by a simple runtime performance comparison, and discussion about emerging active learning applications and instance-selection challenges therein. This survey intends to provide a high-level summarization for active learning and motivates interested readers to consider instance-selection approaches for designing effective active learning solutions.", "title": "" }, { "docid": "700d3e2cb64624df33ef411215d073ab", "text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. 
The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.", "title": "" }, { "docid": "81fa6a7931b8d5f15d55316a6ed1d854", "text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.", "title": "" }, { "docid": "d66799a5d65a6f23527a33b124812ea6", "text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. 
Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.", "title": "" }, { "docid": "b2c03d8e54a2a6840f6688ab9682e24b", "text": "Path following and follow-the-leader motion is particularly desirable for minimally-invasive surgery in confined spaces which can only be reached using tortuous paths, e.g. through natural orifices. While path following and followthe- leader motion can be achieved by hyper-redundant snake robots, their size is usually not applicable for medical applications. Continuum robots, such as tendon-driven or concentric tube mechanisms, fulfill the size requirements for minimally invasive surgery, but yet follow-the-leader motion is not inherently provided. In fact, parameters of the manipulator's section curvatures and translation have to be chosen wisely a priori. In this paper, we consider a tendon-driven continuum robot with extensible sections. After reformulating the forward kinematics model, we formulate prerequisites for follow-the-leader motion and present a general approach to determine a sequence of robot configurations to achieve follow-the-leader motion along a given 3D path. We evaluate our approach in a series of simulations with 3D paths composed of constant curvature arcs and general 3D paths described by B-spline curves. Our results show that mean path errors <;0.4mm and mean tip errors <;1.6mm can theoretically be achieved for constant curvature paths and <;2mm and <;3.1mm for general B-spline curves respectively.", "title": "" }, { "docid": "52606d9059e08bda1bd837c8e5b8296b", "text": "The problem of point of interest (POI) recommendation is to provide personalized recommendations of places, such as restaurants and movie theaters. The increasing prevalence of mobile devices and of location based social networks (LBSNs) poses significant new opportunities as well as challenges, which we address. The decision process for a user to choose a POI is complex and can be influenced by numerous factors, such as personal preferences, geographical considerations, and user mobility behaviors. This is further complicated by the connection LBSNs and mobile devices. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. Meanwhile, although latent factor models have been proved effective and are thus widely used for recommendations, adopting them to POI recommendations requires delicate consideration of the unique characteristics of LBSNs. To this end, in this paper, we propose a general geographical probabilistic factor model (Geo-PFM) framework which strategically takes various factors into consideration. Specifically, this framework allows to capture the geographical influences on a user's check-in behavior. Also, user mobility behaviors can be effectively leveraged in the recommendation model. Moreover, based our Geo-PFM framework, we further develop a Poisson Geo-PFM which provides a more rigorous probabilistic generative process for the entire model and is effective in modeling the skewed user check-in count data as implicit feedback for better POI recommendations. 
Finally, extensive experimental results on three real-world LBSN datasets (which differ in terms of user mobility, POI geographical distribution, implicit response data skewness, and user-POI observation sparsity), show that the proposed recommendation methods outperform state-of-the-art latent factor models by a significant margin.", "title": "" }, { "docid": "10c926cbfe4339a3a5279e238bc1b0a7", "text": "Health outcomes in modern society are often shaped by peer interactions. Increasingly, a significant fraction of such interactions happen online and can have an impact on various mental health and behavioral health outcomes. Guided by appropriate social and psychological research, we conduct an observational study to understand the interactions between clinically depressed users and their ego-network when contrasted with a differential control group of normal users and their ego-network. Specifically, we examine if one can identify relevant linguistic and emotional signals from social media exchanges to detect symptomatic cues of depression. We observe significant deviations in the behavior of depressed users from the control group. Reduced and nocturnal online activity patterns, reduced active and passive network participation, increase in negative sentiment or emotion, distinct linguistic styles (e.g. self-focused pronoun usage), highly clustered and tightly-knit neighborhood structure, and little to no exchange of influence between depressed users and their ego-network over time are some of the observed characteristics. Based on our observations, we then describe an approach to extract relevant features and show that building a classifier to predict depression based on such features can achieve an F-score of 90%.", "title": "" }, { "docid": "ca62a58ac39d0c2daaa573dcb91cd2e0", "text": "Blast-related head injuries are one of the most prevalent injuries among military personnel deployed in service of Operation Iraqi Freedom. Although several studies have evaluated symptoms after blast injury in military personnel, few studies compared them to nonblast injuries or measured symptoms within the acute stage after traumatic brain injury (TBI). Knowledge of acute symptoms will help deployed clinicians make important decisions regarding recommendations for treatment and return to duty. Furthermore, differences more apparent during the acute stage might suggest important predictors of the long-term trajectory of recovery. This study evaluated concussive, psychological, and cognitive symptoms in military personnel and civilian contractors (N = 82) diagnosed with mild TBI (mTBI) at a combat support hospital in Iraq. Participants completed a clinical interview, the Automated Neuropsychological Assessment Metric (ANAM), PTSD Checklist-Military Version (PCL-M), Behavioral Health Measure (BHM), and Insomnia Severity Index (ISI) within 72 hr of injury. Results suggest that there are few differences in concussive symptoms, psychological symptoms, and neurocognitive performance between blast and nonblast mTBIs, although clinically significant impairment in cognitive reaction time for both blast and nonblast groups is observed. Reductions in ANAM accuracy were related to duration of loss of consciousness, not injury mechanism.", "title": "" }, { "docid": "4445f128f31d6f42750049002cb86a29", "text": "Convolutional neural networks are a popular choice for current object detection and classification systems. 
Their performance improves constantly but for effective training, large, hand-labeled datasets are required. We address the problem of obtaining customized, yet large enough datasets for CNN training by synthesizing them in a virtual world, thus eliminating the need for tedious human interaction for ground truth creation. We developed a CNN-based multi-class detection system that was trained solely on virtual world data and achieves competitive results compared to state-of-the-art detection systems.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "6964d3ac400abd6ace1ed48c36d68d06", "text": "Sentiment Analysis (SA) is indeed a fascinating area of research which has stolen the attention of researchers as it has many facets and more importantly it promises economic stakes in the corporate and governance sector. SA has been stemmed out of text analytics and established itself as a separate identity and a domain of research. The wide ranging results of SA have proved to influence the way some critical decisions are taken. Hence, it has become relevant in thorough understanding of the different dimensions of the input, output and the processes and approaches of SA.", "title": "" }, { "docid": "2a7b7d9fab496be18f6bf50add2f7b1e", "text": "BACKROUND\nSuperior Mesenteric Artery Syndrome (SMAS) is a rare disorder caused by compression of the third portion of the duodenum by the SMA. Once a conservative approach fails, usual surgical strategies include Duodenojejunostomy and Strong's procedure. The latter avoids potential anastomotic risks and complications. Robotic Strong's procedure (RSP) combines both the benefits of a minimal invasive approach and also enchased robotic accuracy and efficacy.\n\n\nMETHODS\nFor a young girl who was unsuccessfully treated conservatively, the paper describes the RSP surgical technique. To the authors' knowledge, this is the first report in the literature.\n\n\nRESULTS\nMinimal blood loss, short operative time, short hospital stay and early recovery were the short-term benefits. Significant weight gain was achieved three months after the surgery.\n\n\nCONCLUSION\nBased on primary experience, it is suggested that RSP is a very effective alternative in treating SMAS.", "title": "" } ]
scidocsrr
ac848e7f7162ffd6b2a022426906e321
Visual Learning of Arithmetic Operations
[ { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" } ]
[ { "docid": "90eb392765c01b6166daa2a7a62944d1", "text": "Recent studies have demonstrated the potential for reducing energy consumption in integrated circuits by allowing errors during computation. While most proposed techniques for achieving this rely on voltage overscaling (VOS), this paper shows that Imprecise Hardware (IHW) with design-time structural parameters can achieve orthogonal energy-quality tradeoffs. Two IHW adders are improved and two IHW multipliers are introduced in this paper. In addition, a simulation-free error estimation technique is proposed to rapidly and accurately estimate the impact of IHW on output quality. Finally, a quality-aware energy minimization methodology is presented. To validate this methodology, experiments are conducted on two computational kernels: DOT-PRODUCT and L2-NORM -- used in three applications -- Leukocyte Tracker, SVM classification and K-means clustering. Results show that the Hellinger distance between estimated and simulated error distribution is within 0.05 and that the methodology enables designers to explore energy-quality tradeoffs with significant reduction in simulation complexity.", "title": "" }, { "docid": "6b94f2f88fb62de5bec8ae0ace3afa1c", "text": "The purpose of this paper is to design a microstrip patch antenna with low pass filter for efficient rectenna design this structure having the property of rejecting higher harmonics than 2GHz. As the design frequency is 2GHz.in first step we design a patch antenna in second step we design patch antenna with low pass filter and combine these two. The IE3D software is used for the simulation of this structure.", "title": "" }, { "docid": "2f83b2ef8f71c56069304b0962074edc", "text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.", "title": "" }, { "docid": "8c4ece41e96c08536375e9e72dc9ddc3", "text": "BACKGROUND\nWe present one unusual case of anophthalmia and craniofacial cleft, probably due to congenital toxoplasmosis only.\n\n\nCASE PRESENTATION\nA two-month-old male had a twin in utero who disappeared between the 7th and the 14th week of gestation. At birth, the baby presented anophthalmia and craniofacial cleft, and no sign compatible with genetic or exposition/deficiency problems, like the Wolf-Hirschhorn syndrome or maternal vitamin A deficiency. Congenital toxoplasmosis was confirmed by the presence of IgM abs and IgG neo-antibodies in western blot, as well as by real time PCR in blood. CMV infection was also discarded by PCR and IgM negative results. Structures suggestive of T. gondii pseudocysts were observed in a biopsy taken during the first functional/esthetic surgery.\n\n\nCONCLUSIONS\nWe conclude that this is a rare case of anophthalmia combined with craniofacial cleft due to congenital toxoplasmosis, that must be considered by physicians. This has not been reported before.", "title": "" }, { "docid": "c3c5931200ff752d8285cc1068e779ee", "text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. 
The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.", "title": "" }, { "docid": "397c25e6381818eabadf23d214409e45", "text": "s of Invited Talks Plagiarizing Nature for Engineering Analysis and Design", "title": "" }, { "docid": "4f44b685adc7e63f18a40d0f3fc25585", "text": "Computational Thinking (CT) has become popular in recent years and has been recognised as an essential skill for all, as members of the digital age. Many researchers have tried to define CT and have conducted studies about this topic. However, CT literature is at an early stage of maturity, and is far from either explaining what CT is, or how to teach and assess this skill. In the light of this state of affairs, the purpose of this study is to examine the purpose, target population, theoretical basis, definition, scope, type and employed research design of selected papers in the literature that have focused on computational thinking, and to provide a framework about the notion, scope and elements of CT. In order to reveal the literature and create the framework for computational thinking, an inductive qualitative content analysis was conducted on 125 papers about CT, selected according to pre-defined criteria from six different databases and digital libraries. According to the results, the main topics covered in the papers composed of activities (computerised or unplugged) that promote CT in the curriculum. The targeted population of the papers was mainly K-12. Gamed-based learning and constructivism were the main theories covered as the basis for CT papers. Most of the papers were written for academic conferences and mainly composed of personal views about CT. The study also identified the most commonly used words in the definitions and scope of CT, which in turn formed the framework of CT. The findings obtained in this study may not only be useful in the exploration of research topics in CT and the identification of CT in the literature, but also support those who need guidance for developing tasks or programs about computational thinking and informatics.", "title": "" }, { "docid": "8a9603a10e5e02f6edfbd965ee11bbb9", "text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. 
This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "28b15544f3e054ca483382a471c513e5", "text": "In this work, design and control system development of a gas-electric hybrid quad tilt-rotor UAV with morphing wing are presented. The proposed aircraft has an all carbon-composite body, gas-electric hybrid electric generation system for 3 hours hovering or up to 10 hours of horizontal flight, a novel configuration for VTOL and airplane-like flights with minimized aerodynamic costs and mechanical morphing wings for both low speed and high speed horizontal flights. The mechanical design of the vehicle is performed to achieve a strong and light-weight structure, whereas the aerodynamic and propulsion system designs are aimed for accomplishing both fixed wing and rotary wing aircraft flights with maximized flight endurance. A detailed dynamic model of the aerial vehicle is developed including the effects of tilting rotors, variable fuel weight, and morphing wing lift-drag forces and pitching moments. Control system is designed for both flight regimes and flight simulations are carried out to check the performance of the proposed control system.", "title": "" }, { "docid": "5571389dcc25cbcd9c68517934adce1d", "text": "The polysaccharide-containing extracellular fractions (EFs) of the edible mushroom Pleurotus ostreatus have immunomodulating effects. Being aware of these therapeutic effects of mushroom extracts, we have investigated the synergistic relations between these extracts and BIAVAC and BIAROMVAC vaccines. These vaccines target the stimulation of the immune system in commercial poultry, which are extremely vulnerable in the first days of their lives. By administrating EF with polysaccharides from P. ostreatus to unvaccinated broilers we have noticed slow stimulation of maternal antibodies against infectious bursal disease (IBD) starting from four weeks post hatching. For the broilers vaccinated with BIAVAC and BIAROMVAC vaccines a low to almost complete lack of IBD maternal antibodies has been recorded. By adding 5% and 15% EF in the water intake, as compared to the reaction of the immune system in the previous experiment, the level of IBD antibodies was increased. This has led us to believe that by using this combination of BIAVAC and BIAROMVAC vaccine and EF from P. ostreatus we can obtain good results in stimulating the production of IBD antibodies in the period of the chicken first days of life, which are critical to broilers' survival. 
This can be rationalized by the newly proposed reactivity biological activity (ReBiAc) principles by examining the parabolic relationship between EF administration and recorded biological activity.", "title": "" }, { "docid": "59ce42be854ceb6a92579b43442f016c", "text": "This paper presents the design, fabrication, and characterization of the SiC JBSFET (junction barrier Schottky (JBS) diode integrated MOSFET). The fabrication of the JBSFET adopted a novel single metal, single thermal treatment process to simultaneously form ohmic contacts on n+, p+ implanted regions, and Schottky contact on the n-4H-SiC epilayer. The presented SiC JBSFET uses 40% smaller wafer area because the diode and MOSFET share the edge termination as well as the current conducting drift region. The proposed single chip solution of MOSFET/JBS diode functionalities eliminates the parasitic inductance between separately packaged devices allowing a higher frequency operation in a power converter.", "title": "" }, { "docid": "7c0586335facd8388814f863e19e3d06", "text": "OBJECTIVE\nWe reviewed randomized controlled trials of complementary and alternative medicine (CAM) treatments for depression, anxiety, and sleep disturbance in nondemented older adults.\n\n\nDATA SOURCES\nWe searched PubMed (1966-September 2006) and PsycINFO (1984-September 2006) databases using combinations of terms including depression, anxiety, and sleep; older adult/elderly; randomized controlled trial; and a list of 56 terms related to CAM.\n\n\nSTUDY SELECTION\nOf the 855 studies identified by database searches, 29 met our inclusion criteria: sample size >or= 30, treatment duration >or= 2 weeks, and publication in English. Four additional articles from manual bibliography searches met inclusion criteria, totaling 33 studies.\n\n\nDATA EXTRACTION\nWe reviewed identified articles for methodological quality using a modified Scale for Assessing Scientific Quality of Investigations (SASQI). We categorized a study as positive if the CAM therapy proved significantly more effective than an inactive control (or as effective as active control) on at least 1 primary psychological outcome. Positive and negative studies were compared on the following characteristics: CAM treatment category, symptom(s) assessed, country where the study was conducted, sample size, treatment duration, and mean sample age.\n\n\nDATA SYNTHESIS\n67% of the 33 studies reviewed were positive. Positive studies had lower SASQI scores for methodology than negative studies. Mind-body and body-based therapies had somewhat higher rates of positive results than energy- or biologically-based therapies.\n\n\nCONCLUSIONS\nMost studies had substantial methodological limitations. A few well-conducted studies suggested therapeutic potential for certain CAM interventions in older adults (e.g., mind-body interventions for sleep disturbances and acupressure for sleep and anxiety). More rigorous research is needed, and suggestions for future research are summarized.", "title": "" }, { "docid": "87ab746df486a15b895cc0a4706db6c7", "text": "Many complex systems in the real world can be modeled as signed social networks that contain both positive and negative relations. Algorithms for mining social networks have been developed in the past; however, most of them were designed primarily for networks containing only positive relations and, thus, are not suitable for signed networks. 
In this work, we propose a new algorithm, called FEC, to mine signed social networks where both positive within-group relations and negative between-group relations are dense. FEC considers both the sign and the density of relations as the clustering attributes, making it effective for not only signed networks but also conventional social networks including only positive relations. Also, FEC adopts an agent-based heuristic that makes the algorithm efficient (in linear time with respect to the size of a network) and capable of giving nearly optimal solutions. FEC depends on only one parameter whose value can easily be set and requires no prior knowledge on hidden community structures. The effectiveness and efficacy of FEC have been demonstrated through a set of rigorous experiments involving both benchmark and randomly generated signed networks.", "title": "" }, { "docid": "bbfc488e55fe2dfaff2af73a75c31edd", "text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.", "title": "" }, { "docid": "c691820eec90395366a415f19b2e8764", "text": "This study attempts to identify the salient factors affecting tourist food consumption. By reviewing available studies in the hospitality and tourism literature and synthesising insights from food consumption and sociological research, five socio-cultural and psychological factors influencing tourist food consumption are identified: cultural/religious influences, socio-demographic factors, food-related personality traits, exposure effect/past experience, and motivational factors. The findings further suggest that the motivational factors can be categorised into five main dimensions: symbolic, obligatory, contrast, extension, and pleasure. 
Given the lack of research in examining tourist food consumption systematically, the multidisciplinary approach adopted in this study allows a comprehensive understanding of the phenomenon which forms the basis for further research and conceptual elaboration.", "title": "" }, { "docid": "e79abaaa50d8ab8938f1839c7e4067f9", "text": "We review the objectives and techniques used in the control of horizontal axis wind turbines at the individual turbine level, where controls are applied to the turbine blade pitch and generator. The turbine system is modeled as a flexible structure operating in the presence of turbulent wind disturbances. Some overview of the various stages of turbine operation and control strategies used to maximize energy capture in below rated wind speeds is given, but emphasis is on control to alleviate loads when the turbine is operating at maximum power. After reviewing basic turbine control objectives, we provide an overview of the common basic linear control approaches and then describe more advanced control architectures and why they may provide significant advantages.", "title": "" }, { "docid": "e7b9c3ef571770788cd557f8c4843bcf", "text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. Our experiments on real data set show that the proposed algorithm performs better than the other algorithms that we compared to and, at the same time, it is less complex than other algorithms with respect to memory usage and computational cost too.", "title": "" }, { "docid": "e605e0417160dec6badddd14ec093843", "text": "Within both academic and policy discourses, the concept of media literacy is being extended from its traditional focus on print and audiovisual media to encompass the internet and other new media. The present article addresses three central questions currently facing the public, policy-makers and academy: What is media literacy? How is it changing? And what are the uses of literacy? The article begins with a definition: media literacy is the ability to access, analyse, evaluate and create messages across a variety of contexts. This four-component model is then examined for its applicability to the internet. Having advocated this skills-based approach to media literacy in relation to the internet, the article identifies some outstanding issues for new media literacy crucial to any policy of promoting media literacy among the population. The outcome is to extend our understanding of media literacy so as to encompass the historically and culturally conditioned relationship among three processes: (i) the symbolic and material representation of knowledge, culture and values; (ii) the diffusion of interpretative skills and abilities across a (stratified) population; and (iii) the institutional, especially, the state management of the power that access to and skilled use of knowledge brings to those who are ‘literate’.", "title": "" }, { "docid": "e1dd2a719d3389a11323c5245cd2b938", "text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. 
At the same time, user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reason for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitude faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained with the Java Card.", "title": "" },
    { "docid": "a68cec6fd069499099c8bca264eb0982", "text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.", "title": "" } ]
scidocsrr
4a37ef1f710754020c100affd2bd5ff0
Topic Modelling with Word Embeddings
[ { "docid": "f6121f69419a074b657bb4a0324bae4a", "text": "Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling needs to scale to larger topic spaces and use richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. We evaluate SC-LDA’s ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but is significantly faster. 1 Challenge: Leveraging Prior Knowledge in Large-scale Topic Models Topic models, such as Latent Dirichlet Allocation (Blei et al., 2003, LDA), have been successfully used for discovering hidden topics in text collections. LDA is an unsupervised model—it requires no annotation—and discovers, without any supervision, the thematic trends in a text collection. However, LDA’s lack of supervision can lead to disappointing results. Often, the hidden topics learned by LDA fail to make sense to end users. Part of the problem is that the objective function of topic models does not always correlate with human judgments of topic quality (Chang et al., 2009). Therefore, it’s often necessary to incorporate prior knowledge into topic models to improve the model’s performance. Recent work has also shown that by interactive human feedback can improve the quality and stability of topics (Hu and Boyd-Graber, 2012; Yang et al., 2015). Information about documents (Ramage et al., 2009) or words (Boyd-Graber et al., 2007) can improve LDA’s topics. In addition to its occasional inscrutability, scalability can also hamper LDA’s adoption. Conventional Gibbs sampling—the most widely used inference for LDA—scales linearly with the number of topics. Moreover, accurate training usually takes many sampling passes over the dataset. Therefore, for large datasets with millions or even billions of tokens, conventional Gibbs sampling takes too long to finish. For standard LDA, recently introduced fast sampling methods (Yao et al., 2009; Li et al., 2014; Yuan et al., 2015) enable industrial applications of topic modeling to search engines and online advertising, where capturing the “long tail” of infrequently used topics requires large topic spaces. For example, while typical LDA models in academic papers have up to 103 topics, industrial applications with 105–106 topics are common (Wang et al., 2014). Moreover, scaling topic models to many topics can also reveal the hierarchical structure of topics (Downey et al., 2015). Thus, there is a need for topic models that can both benefit from rich prior information and that can scale to large datasets. However, existing methods for improving scalability focus on topic models without prior information. To rectify this, we propose a factor graph model that encodes a potential function over the hidden topic variables, encouraging topics consistent with prior knowledge. The factor model representation admits an efficient sampling algorithm that takes advantage of the model’s sparsity. 
We show that our method achieves comparable performance but runs significantly faster than baseline methods, enabling the discovery of models with many topics enriched by prior knowledge. 2 Efficient Algorithm for Incorporating Knowledge into LDA In this section, we introduce the factor model for incorporating prior knowledge and show how to efficiently use Gibbs sampling for inference. 2.1 Background: LDA and SparseLDA A statistical topic model represents words in documents in a collection D as mixtures of T topics, which are multinomials over a vocabulary of size V. In LDA, each document d is associated with a multinomial distribution over topics, θ_d. The probability of a word type w given topic z is φ_{w|z}. The multinomial distributions θ_d and φ_z are drawn from Dirichlet distributions: α and β are the hyperparameters for θ and φ. We represent the document collection D as a sequence of words w, and topic assignments as z. We use symmetric priors α and β in the model and experiment, but asymmetric priors are easily encoded in the models (Wallach et al., 2009). Discovering the latent topic assignments z from observed words w requires inferring the posterior distribution P(z|w). Griffiths and Steyvers (2004) propose using collapsed Gibbs sampling. The probability of a topic assignment z = t in document d given an observed word type w and the other topic assignments z_− is P(z = t | z_−, w) ∝ (n_{d,t} + α) (n_{w,t} + β) / (n_t + Vβ)", "title": "" } ]
[ { "docid": "8d0221daae5933760698b8f4f7943870", "text": "We introduce a novel, online method to predict pedestrian trajectories using agent-based velocity-space reasoning for improved human-robot interaction and collision-free navigation. Our formulation uses velocity obstacles to model the trajectory of each moving pedestrian in a robot’s environment and improves the motion model by adaptively learning relevant parameters based on sensor data. The resulting motion model for each agent is computed using statistical inferencing techniques, including a combination of Ensemble Kalman filters and a maximum-likelihood estimation algorithm. This allows a robot to learn individual motion parameters for every agent in the scene at interactive rates. We highlight the performance of our motion prediction method in real-world crowded scenarios, compare its performance with prior techniques, and demonstrate the improved accuracy of the predicted trajectories. We also adapt our approach for collision-free robot navigation among pedestrians based on noisy data and highlight the results in our simulator.", "title": "" }, { "docid": "3538d14694af47dc0fb31696913da15a", "text": "Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multiquery optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hither-to been viewed as impractical, since earlier algorithms were exhaustive, and explore a doubly exponential search space.\nIn this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.", "title": "" }, { "docid": "8311e231fc648a725cd643ed531aeef9", "text": "Given an image stream, our on-line algorithm will select the semantically-important images that summarize the visual experience of a mobile robot. Our approach consists of data pre-clustering using coresets followed by a graph based incremental clustering procedure using a topic based image representation. A coreset for an image stream is a set of representative images that semantically compresses the data corpus, in the sense that every frame has a similar representative image in the coreset. We prove that our algorithm efficiently computes the smallest possible coreset under natural well-defined similarity metric and up to provably small approximation factor. The output visual summary is computed via a hierarchical tree of coresets for different parts of the image stream. 
This allows multi-resolution summarization (or a video summary of specified duration) in the batch setting and a memory-efficient incremental summary for the streaming case.", "title": "" }, { "docid": "a747b503e597ebdb9fd1a32b9dccd04e", "text": "In this paper, we introduce KAZE features, a novel multiscale 2D feature detection and description algorithm in nonlinear scale spaces. Previous approaches detect and describe features at different scale levels by building or approximating the Gaussian scale space of an image. However, Gaussian blurring does not respect the natural boundaries of objects and smoothes to the same degree both details and noise, reducing localization accuracy and distinctiveness. In contrast, we detect and describe 2D features in a nonlinear scale space by means of nonlinear diffusion filtering. In this way, we can make blurring locally adaptive to the image data, reducing noise but retaining object boundaries, obtaining superior localization accuracy and distinctiviness. The nonlinear scale space is built using efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. We present an extensive evaluation on benchmark datasets and a practical matching application on deformable surfaces. Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space, but comparable to SIFT, our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods.", "title": "" }, { "docid": "1c0b590a687f628cb52d34a37a337576", "text": "Hexagonal torus networks are special family of Eisenstein-Jacobi (EJ) networks which have gained popularity as good candidates network On-Chip (NoC) for interconnecting Multiprocessor System-on-Chips (MPSoCs). They showed better topological properties compared to the 2D torus networks with the same number of nodes. All-to-all broadcast is a collective communication algorithm used frequently in some parallel applications. Recently, an off-chip all-to-all broadcast algorithm has been proposed for hexagonal torus networks assuming half-duplex links and all-ports communication. The proposed all-to-all broadcast algorithm does not achieve the minimum transmission time and requires 24 kextra buffers, where kis the network diameter. We first extend this work by proposing an efficient all-to-all broadcast on hexagonal torus networks under full-duplex links and all-ports communications assumptions which achieves the minimum transmission delay but requires 36 k extra buffers per router. In a second stage, we develop a new all-to-all broadcast more suitable for hexagonal torus network on-chip that achieves optimal transmission delay time without requiring any extra buffers per router. By reducing the amount of buffer space, the new all-to-all broadcast reduces the routers cost which is an important issue in NoCs architectures.", "title": "" }, { "docid": "4b284736c51435f9ab6f52f174dc7def", "text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. 
Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.", "title": "" }, { "docid": "500e8ab316398313c90a0ea374f28ee8", "text": "Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.", "title": "" }, { "docid": "264338f11dbd4d883e791af8c15aeb0d", "text": "With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learningbased 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.", "title": "" }, { "docid": "0dafc618dbeb04c5ee347142d915a415", "text": "Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. 
These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.", "title": "" }, { "docid": "419b3914edc182e4deffd05edcabcbe8", "text": "To investigate the effects of self-presentation on the construct validity of the impression management (IM) and self-deceptive enhancement (SDE) scales of the Balanced Inventory of Social Desirable Responding Version 7 (BIDR-7), 155 participants completed the IM and SDE scales combined with standard instructions. IM and SDE were also presented with three self-presentation instructions: fake good, Agency, and Communion instructions. In addition, selfand social desirability ratings were assessed for a list of 190 personality-trait words. It could be shown that not only IM, but also SDE can be faked if participants are appropriately instructed to do so. In addition, personality-trait words related to IM were rated as socially more desirable than those related to SDE. BIDR scales were more highly related in the faking conditions than in the standard instruction condition. In addition, faked BIDR scores were not related to undistorted BIDR scores. These results implicate that both SDE and IM are susceptible to faking like any other personality questionnaire, and that both SDE and IM loose their original meaning under faking. Therefore, at least under faking social desirability scales do not seem to provide additional diagnostic information beyond that derived from personality scales. 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9581c692787cfef1ce2916100add4c1e", "text": "Diabetes related eye disease is growing as a major health concern worldwide. Diabetic retinopathy is an infirmity due to higher level of glucose in the retinal capillaries, resulting in cloudy vision and blindness eventually. With regular screening, pathology can be detected in the instigating stage and if intervened with in time medication could prevent further deterioration. This paper develops an automated diagnosis system to recognize retinal blood vessels, and pathologies, such as exudates and microaneurysms together with certain texture properties using image processing techniques. These anatomical and texture features are then fed into a multiclass support vector machine (SVM) for classifying it into normal, mild, moderate, severe and proliferative categories. Advantages include, it processes quickly a large collection of fundus images obtained from mass screening which lessens cost and increases efficiency for ophthalmologists. Our method was evaluated on two publicly available databases and got encouraging results with a state of the art in this area.", "title": "" }, { "docid": "a40d3b98ab50a5cd924be09ab1f1cc40", "text": "Feeling comfortable reading and understanding financial statements is critical to the success of healthcare executives and physicians involved in management. Businesses use three primary financial statements: a balance sheet represents the equation, Assets = Liabilities + Equity; an income statement represents the equation, Revenues - Expenses = Net Income; a statement of cash flows reports all sources and uses of cash during the represented period. The balance sheet expresses financial indicators at one particular moment in time, whereas the income statement and the statement of cash flows show activity that occurred over a stretch of time. 
Additional information is disclosed in attached footnotes and other supplementary materials. There are two ways to prepare financial statements. Cash-basis accounting recognizes revenue when it is received and expenses when they are paid. Accrual-basis accounting recognizes revenue when it is earned and expenses when they are incurred. Although cash-basis is acceptable, periodically using the accrual method reveals important information about receivables and liabilities that could otherwise remain hidden. Become more engaged with your financial statements by spending time reading them, tracking key performance indicators, and asking accountants and financial advisors questions. This will help you better understand your business and build a successful future.", "title": "" }, { "docid": "0ec7969da568af2e743d969f9805063d", "text": "In this letter, a notched-band Vivaldi antenna with high-frequency selectivity is designed and investigated. To obtain two notched poles inside the stopband, an open-circuited half-wavelength resonator and a short-circuited stepped impedance resonator are properly introduced into the traditional Vivaldi antenna. By theoretically calculating the resonant frequencies of the two loaded resonators, the frequency locations of the two notched poles can be precisely determined, thus achieving a wideband antenna with a desired notched band. To validate the feasibility of this new approach, a notched band antenna with a fractional bandwidth of 145.8% is fabricated and tested. Results indicate that good frequency selectivity of the notched band from 4.9 to 6.6 GHz is realized, and the antenna exhibits good impedance match, high radiation gain, and excellent radiation directivity in the passband. Both the simulation and measurement results are provided with good agreement.", "title": "" }, { "docid": "cebeaf1d155d5d7e4c62ec84cf36c087", "text": "This paper presents the comparison of power captured by vertical and horizontal axis wind turbine (VAWT and HAWT). According to Betz, the limit of maximum coefficient power (CP) is 0.59. In this case CP is important parameter that determines the power extracted by a wind turbine we made. This paper investigates the impact of wind speed variation of wind turbine to extract the power. For VAWT we used H-darrieus type whose swept area is 3.14 m2 and so is HAWT. The wind turbines have 3 blades for each type. The air foil of both wind turbines are NACA 4412. We tested the model of wind turbine with various wind velocity which affects the performance. We have found that CP of HAWT is 0.54 with captured maximum power is 1363.6 Watt while the CP of VAWT is 0.34 with captured maximum power is 505.69 Watt. The power extracted of both wind turbines seems that HAWT power is much better than VAWT power.", "title": "" }, { "docid": "e990d87c81e9c49fd45fc27afc6ebc07", "text": "PURPOSE\nThis study aimed to evaluate the effects of the subchronic consumption of energy drinks and their constituents (caffeine and taurine) in male Wistar rats using behavioural and oxidative measures.\n\n\nMETHODS\nEnergy drinks (ED 5, 7.5, and 10 mL/kg) or their constituents, caffeine (3.2 mg/kg) and taurine (40 mg/kg), either separately or in combination, were administered orally to animals for 28 days. Attention was measured though the ox-maze apparatus and the object recognition memory test. 
Following behavioural analyses, markers of oxidative stress, including SOD, CAT, GPx, thiol content, and free radicals, were measured in the prefrontal cortex, hippocampus, and striatum.\n\n\nRESULTS\nThe latency time to find the first reward was lower in animals that received caffeine, taurine, or a combination of both (P = 0.003; ANOVA/Bonferroni). In addition, these animals took less time to complete the ox-maze task (P = 0.0001; ANOVA/Bonferroni), and had better short-term memory (P < 0.01, Kruskal-Wallis). The ED 10 group showed improvement in the attention task, but did not differ on other measures. In addition, there was an imbalance in enzymatic markers of oxidative stress in the prefrontal cortex, the hippocampus, and the striatum. In the group that received both caffeine and taurine, there was a significant increase in the production of free radicals in the prefrontal cortex and in the hippocampus (P < 0.0001; ANOVA/Bonferroni).\n\n\nCONCLUSIONS\nExposure to a combination of caffeine and taurine improved memory and attention, and led to an imbalance in the antioxidant defence system. These results differed from those of the group that was exposed to the energy drink. This might be related to other components contained in the energy drink, such as vitamins and minerals, which may have altered the ability of caffeine and taurine to modulate memory and attention.", "title": "" }, { "docid": "af56806a30f708cb0909998266b4d8c1", "text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.", "title": "" }, { "docid": "72e4984c05e6b68b606775bbf4ce3b33", "text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.", "title": "" }, { "docid": "eb971f815c884ba873685ceb5779258e", "text": "While many schools of psychotherapy have held that our early experiences with our caretakers have a powerful impact on our adult functioning, there have been plenty of hard-nosed academics and researchers who've remained unconvinced. 
Back in 1968, psychologist Walter Mischel created quite a stir when he challenged the concept that we even have a core personality that organizes our behavior, contending instead that situational factors are much better predictors of what we think and do. Some developmental psychologists, like Judith Rich Harris, author of The Nurture Assumption, have gone so far as to argue that the only important thing parents give their children is their genes, not their care. Others, like Jerome Kagan, have emphasized the ongoing influence of inborn temperament in shaping human experience, asserting that the effect of early experience, if any, is far more fleeting than is commonly assumed. In one memorable metaphor, Kagan likened the unfolding of life to a tape recorder with the record button always turned on and new experiences overwriting and erasing previous experiences. n At the same time, the last 50 years have seen the accumulation of studies supporting an alternative view: the idea that the emotional quality of our earliest attachment experience is perhaps the single most important influence on human development. The central figure in the birth of this school of research has been British psychiatrist and psychoanalyst John Bowlby, who challenged the Freudian view of development, claiming that it had focused too narrowly on the inner world of the child without taking into account the actual relational environment that shapes the earliest stages of human consciousness.", "title": "" }, { "docid": "cc220d8ae1fa77b9e045022bef4a6621", "text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.", "title": "" }, { "docid": "a5f557ddac63cd24a11c1490e0b4f6d4", "text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. 
In this paper, we have studied the impact of topology and the introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of the algorithms on several benchmark functions. Experimentation demonstrates that scale-free CODO performs significantly better than all other algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.", "title": "" } ]
scidocsrr
50099f5e41fde52e443e6551904d23b9
Exploiting self-similarity in geometry for voxel based solid modeling
[ { "docid": "91dbb5df6bc5d3db43b51fc7a4c84468", "text": "An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented. These algorithms scan-convert 3D geometric objects into their discrete voxel-map representation within a Cubic Frame Buffer (CFB). The geometric objects that are studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled), and quadratic objects (optionally filled) like those used in constructive solid geometry: cylinders, cones, and spheres.\nAll algorithms presented here do scan-conversion with computational complexity which is linear in the number of voxels written to the CFB. All algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. Since the algorithms are basically sequential, the temporal complexity is also linear. However, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the CFB. The temporal complexity would then be linear with the number of pixels in the object's 2D projection. All algorithms have been implemented as part of the CUBE Architecture, which is a voxel-based system for 3D graphics. The CUBE architecture is also presented.", "title": "" }, { "docid": "1d8db3e4aada7f5125cd72df4dfab1f4", "text": "Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.", "title": "" } ]
[ { "docid": "7959204dbaa087fc7c37e4157e057efc", "text": "OBJECTIVE\nThe primary objective of this study was to compare the effectiveness of a water flosser plus sonic toothbrush to a sonic toothbrush alone on the reduction of bleeding, gingivitis, and plaque. The secondary objective was to compare the effectiveness of different sonic toothbrushes on bleeding, gingivitis, and plaque.\n\n\nMETHODS\nOne-hundred and thirty-nine subjects completed this randomized, four-week, single-masked, parallel clinical study. Subjects were assigned to one of four groups: Waterpik Complete Care, which is a combination of a water flosser plus power toothbrush (WFS); Sensonic Professional Plus Toothbrush (SPP); Sonicare FlexCare toothbrush (SF); or an Oral-B Indicator manual toothbrush (MT). Subjects were provided written and verbal instructions for all power products at baseline, and instructions were reviewed at the two-week visit. Data were evaluated for whole mouth, facial, and lingual surfaces for bleeding on probing (BOP) and gingivitis (MGI). Plaque data were evaluated for whole mouth, lingual, facial, approximal, and marginal areas of the tooth using the Rustogi Modification of the Navy Plaque Index (RMNPI). Data were recorded at baseline (BL), two weeks (W2), and four weeks (W4).\n\n\nRESULTS\nAll groups showed a significant reduction from BL in BOP, MGI, and RMNPI for all areas measured at the W2 and W4 visits (p < 0.001). The reduction of BOP was significantly higher for the WFS group than the other three groups at W2 and W4 for all areas measured (p < 0.001 for all, except p = 0.007 at W2 and p = 0.008 for W4 lingual comparison to SPP). The WFS group was 34% more effective than the SPP group, 70% more effective than the SF group, and 1.59 times more effective than the MT group for whole mouth bleeding scores (p < 0.001) at W4. The reduction of MGI was significantly higher for the WFS group; 23% more effective than SPP, 48% more effective than SF, and 1.35 times more effective than MT for whole mouth (p <0.001) at W4. The reduction of MGI was significantly higher for WFS than the SF and MT for facial and lingual surfaces, and more effective than the SPP for facial surfaces (p < 0.001) at W4. The WFS group showed significantly better reductions for plaque than the SF and MT groups for whole mouth, facial, lingual, approximal, and marginal areas at W4 (p < 0.001; SF facial p = 0.025). For plaque reduction, the WFS was significantly better than the SPP for whole mouth (p = 0.003) and comparable for all other areas and surfaces at W4. The WFS was 52% more effective for whole mouth, 31% for facial, 77% for lingual, 1.22 times for approximal, and 1.67 times for marginal areas compared to the SF for reducing plaque scores at W4 (p < 0.001; SF facial p = 0.025). The SPP had significantly higher reductions than the SF for whole mouth and lingual BOP and MGI scores, and whole mouth, approximal, marginal, and lingual areas for plaque at W4.\n\n\nCONCLUSION\nThe Waterpik Complete Care is significantly more effective than the Sonicare FlexCare toothbrush for reducing gingival bleeding, gingivitis, and plaque. The Sensonic Professional Plus Toothbrush is significantly more effective than the Sonicare Flex-Care for reducing gingival bleeding, gingivitis, and plaque.", "title": "" }, { "docid": "e69a90ff7c2cd96a8e31cef5cb1ee2d4", "text": "Smart grids are essentially electric grids that use information and communication technology to provide reliable, efficient electricity transmission and distribution. 
Security and trust are of paramount importance. Among various emerging security issues, FDI attacks are one of the most substantial ones, which can significantly increase the cost of the energy distribution process. However, most current research focuses on countermeasures to FDIs for traditional power grids rather smart grid infrastructures. We propose an efficient and real-time scheme to detect FDI attacks in smart grids by exploiting spatial-temporal correlations between grid components. Through realistic simulations based on the US smart grid, we demonstrate that the proposed scheme provides an accurate and reliable solution.", "title": "" }, { "docid": "001b5a976b6b6ccb15ab80ead4617422", "text": "Multivariate time-series modeling and forecasting is an important problem with numerous applications. Traditional approaches such as VAR (vector auto-regressive) models and more recent approaches such as RNNs (recurrent neural networks) are indispensable tools in modeling time-series data. In many multivariate time series modeling problems, there is usually a significant linear dependency component, for which VARs are suitable, and a nonlinear component, for which RNNs are suitable. Modeling such times series with only VAR or only RNNs can lead to poor predictive performance or complex models with large training times. In this work, we propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs. R2N2s can be trained using existing algorithms for VARs and RNNs. Through an extensive empirical evaluation on two real world datasets (aviation and climate domains), we show that R2N2 is competitive, usually better than VAR or RNN, used alone. We also show that R2N2 is faster to train as compared to an RNN, while requiring less number of hidden units.", "title": "" }, { "docid": "d21e4e55966bac19bbed84b23360b66d", "text": "Smart growth is an approach to urban planning that provides a framework for making community development decisions. Despite its growing use, it is not known whether smart growth can impact physical activity. This review utilizes existing built environment research on factors that have been used in smart growth planning to determine whether they are associated with physical activity or body mass. Searching the MEDLINE, Psycinfo and Web-of-Knowledge databases, 204 articles were identified for descriptive review, and 44 for a more in-depth review of studies that evaluated four or more smart growth planning principles. Five smart growth factors (diverse housing types, mixed land use, housing density, compact development patterns and levels of open space) were associated with increased levels of physical activity, primarily walking. Associations with other forms of physical activity were less common. Results varied by gender and method of environmental assessment. Body mass was largely unaffected. This review suggests that several features of the built environment associated with smart growth planning may promote important forms of physical activity. Future smart growth community planning could focus more directly on health, and future research should explore whether combinations or a critical mass of smart growth features is associated with better population health outcomes.", "title": "" }, { "docid": "2d0f0ebf29edc46ad68f1f6c358984db", "text": "A multilevel approach was used to analyse relationships between perceived classroom environments and emotions in mathematics. 
Based on Pekrun’s (2000) [A social-cognitive, control-value theory of achievement emotions. In J. Heckhausen (Ed.), Motivational psychology of human development (pp. 143e163)] social-cognitive, control-value theory of achievement emotions, we hypothesized that environmental characteristics conveying control and value to the students would be related to their experience of enjoyment, anxiety, anger, and boredom in mathematics. Multilevel modelling of data from 1623 students from 69 classes (grades 5e10) confirmed close relationships between environmental variables and emotional experiences that functioned predominantly at the individual level. Compositional effects further revealed that classes’ aggregate environment perceptions as well as their compositions in terms of aggregate achievement and gender ratio were additionally linked to students’ emotions in mathematics. Methodological and practical implications of the findings are discussed. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c638fe67f5d4b6e04a37e216edb849fa", "text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.", "title": "" }, { "docid": "7a52fecf868040da5db3bd6fcbdcc0b2", "text": "Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy cost of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate energy efficiency improvement of our proposed EECO scheme.", "title": "" }, { "docid": "b3d4f37cbf2b277ecec7291d12f4dde5", "text": "This paper reports on the design, fabrication, assembly, as well as the optical, mechanical and thermal characterization of a novel MEMS-based optical cochlear implant (OCI). Building on advances in optogenetics, it will enable the optical stimulation of neural activity in the auditory pathway at 10 independently controlled spots. 
The optical stimulation of the spiral ganglion neurons (SGNs) promises a pronounced increase in the number of discernible acoustic frequency channels in comparison with commercial cochlear implants based on the electrical stimulation. Ten high-efficiency light-emitting diodes are integrated as a linear array onto an only 12-μm-thick highly flexible polyimide substrate with three metal and three polyimide layers. The high mechanical flexibility of this novel OCI enables its insertion into a 300 μm wide channel with an outer bending radius of 1 mm. The 2 cm long and only 240 μm wide OCI is electrically passivated with a thin layer of Cy-top™.", "title": "" }, { "docid": "9cf48e5fa2cee6350ac31f236696f717", "text": "Komatiites are rare ultramafic lavas that were produced most commonly during the Archean and Early Proterozoic and less frequently in the Phanerozoic. These magmas provide a record of the thermal and chemical characteristics of the upper mantle through time. The most widely cited interpretation is that komatiites were produced in a plume environment and record high mantle temperatures and deep melting pressures. The decline in their abundance from the Archean to the Phanerozoic has been interpreted as primary evidence for secular cooling (up to 500‡C) of the mantle. In the last decade new evidence from petrology, geochemistry and field investigations has reopened the question of the conditions of mantle melting preserved by komatiites. An alternative proposal has been rekindled: that komatiites are produced by hydrous melting at shallow mantle depths in a subduction environment. This alternative interpretation predicts that the Archean mantle was only slightly (V100‡C) hotter than at present and implicates subduction as a process that operated in the Archean. Many thermal evolution and chemical differentiation models of the young Earth use the plume origin of komatiites as a central theme in their model. Therefore, this controversy over the mechanism of komatiite generation has the potential to modify widely accepted views of the Archean Earth and its subsequent evolution. This paper briefly reviews some of the pros and cons of the plume and subduction zone models and recounts other hypotheses that have been proposed for komatiites. We suggest critical tests that will improve our understanding of komatiites and allow us to better integrate the story recorded in komatiites into our view of early Earth evolution. 6 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "69f72b8eadadba733f240fd652ca924e", "text": "We address the problem of finding descriptive explanations of facts stored in a knowledge graph. This is important in high-risk domains such as healthcare, intelligence, etc. where users need additional information for decision making and is especially crucial for applications that rely on automatically constructed knowledge bases where machine learned systems extract facts from an input corpus and working of the extractors is opaque to the end-user. We follow an approach inspired from information retrieval and propose a simple and efficient, yet effective solution that takes into account passage level as well as document level properties to produce a ranked list of passages describing a given input relation. 
We test our approach using Wikidata as the knowledge base and Wikipedia as the source corpus and report results of user studies conducted to study the effectiveness of our proposed model.", "title": "" }, { "docid": "63de2448edead6e16ef2bc86c3acd77b", "text": "In traditional topic models such as LDA, a word is generated by choosing a topic from a collection. However, existing topic models do not identify different types of topics in a document, such as topics that represent the content and topics that represent the sentiment. In this paper, our goal is to discover such different types of topics, if they exist. We represent our model as several parallel topic models (called topic factors), where each word is generated from topics from these factors jointly. Since the latent membership of the word is now a vector, the learning algorithms become challenging. We show that using a variational approximation still allows us to keep the algorithm tractable. Our experiments over several datasets show that our approach consistently outperforms many classic topic models while also discovering fewer, more meaningful, topics. 1", "title": "" }, { "docid": "db9ab90f56a5762ebf6729ffc802a02a", "text": "In this paper we present a novel approach to music analysis, in which a grammar is automatically generated explaining a musical work’s structure. The proposed method is predicated on the hypothesis that the shortest possible grammar provides a model of the musical structure which is a good representation of the composer’s intent. The effectiveness of our approach is demonstrated by comparison of the results with previously-published expert analysis; our automated approach produces results comparable to human annotation. We also illustrate the power of our approach by showing that it is able to locate errors in scores, such as introduced by OMR or human transcription. Further, our approach provides a novel mechanism for intuitive high-level editing and creative transformation of music. A wide range of other possible applications exists, including automatic summarization and simplification; estimation of musical complexity and similarity, and plagiarism detection.", "title": "" }, { "docid": "0d5fd1dfdcb6beda733eb43f2ed834ea", "text": "In this paper, approximation techniques based on the shifted Jacobi together with spectral tau technique are presented to solve a class of initial-boundary value problems for the fractional diffusion equations with variable coefficients on a finite domain. The fractional derivatives are described in the Caputo sense. The technique is derived by expanding the required approximate solution as the elements of shifted Jacobi polynomials. Using the operational matrix of the fractional derivative, the problem can be reduced to a set of linear algebraic equations. Numerical examples are included to demonstrate the validity and applicability of the technique and a comparison is made with the existing results to show that the proposed method is easy to implement and produce accurate results.", "title": "" }, { "docid": "426a7c1572e9d68f4ed2429f143387d5", "text": "Face tracking is an active area of computer vision research and an important building block for many applications. However, opposed to face detection, there is no common benchmark data set to evaluate a tracker’s performance, making it hard to compare results between different approaches. 
In this challenge we propose a data set, annotation guidelines and a well defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.", "title": "" }, { "docid": "261318ee599b56b005a5581bd33938b9", "text": "This paper reports on a study of the prevalence of and possible reasons for peer-to-peer transaction marketplace (P2PM) users turning to out-of-market (OOM) transactions after finding transaction partners within a P2P system. We surveyed 97 P2PM users and interviewed 22 of 58 who reported going OOM. We did not find any evidence of predisposing personality factors for OOM activity; instead, it seems to be a rational response to circumstances, with a variety of situationally rational motivations at play, such as liking the transaction partner and trusting that good quality repeat transactions will occur in the future.", "title": "" }, { "docid": "4c3d8c30223ef63b54f8c7ba3bd061ed", "text": "There is much recent work on using the digital footprints left by people on social media to predict personal traits and gain a deeper understanding of individuals. Due to the veracity of social media, imperfections in prediction algorithms, and the sensitive nature of one's personal traits, much research is still needed to better understand the effectiveness of this line of work, including users' preferences of sharing their computationally derived traits. In this paper, we report a two- part study involving 256 participants, which (1) examines the feasibility and effectiveness of automatically deriving three types of personality traits from Twitter, including Big 5 personality, basic human values, and fundamental needs, and (2) investigates users' opinions of using and sharing these traits. Our findings show there is a potential feasibility of automatically deriving one's personality traits from social media with various factors impacting the accuracy of models. The results also indicate over 61.5% users are willing to share their derived traits in the workplace and that a number of factors significantly influence their sharing preferences. Since our findings demonstrate the feasibility of automatically inferring a user's personal traits from social media, we discuss their implications for designing a new generation of privacy-preserving, hyper-personalized systems.", "title": "" }, { "docid": "e24743e3a183ebd20d5d3cfd2b3b3235", "text": "This new book by Andrew Cohen comes in the well-established series, Applied Linguistics and Language Study, which explores key issues in language acquisition and language use. Cohen’s book focuses on learner strategies and is written primarily for teachers, administrators, and researchers of second and foreign language programmes. It is hard to think of a more suitable author of a book on how to go about the complex endeavour of learning a second or foreign language than Cohen, himself a learner of twelve languages and a continuous user of seven! Of course, Cohen is also an experienced conductor of research focusing on learner strategies and the author and co-author of numerous articles on the topic. Except for a research report on strategies-based instruction which appears in print for the first time in Chapter 5, all of the chapters in the present volume consist of previously published material, either by Cohen alone or co-authored with Cohen, which has been revised and updated. 
After a short introduction, Cohen starts out with a discussion of terminology in Chapter 2, suggesting the broad working definition of second language learner strategies to encompass both second language learning and second language use strategies. According to Cohen, second language learner strategies can be defined:", "title": "" }, { "docid": "ea75bf062f21a12aacd88ccb61ba47a0", "text": "This paper describes a Twitter sentiment analysis system that classifies a tweet as positive or negative based on its overall tweet-level polarity. Supervised learning classifiers often misclassify tweets containing conjunctions such as “but” and conditionals such as “if”, due to their special linguistic characteristics. These classifiers also assign a decision score very close to the decision boundary for a large number tweets, which suggests that they are simply unsure instead of being completely wrong about these tweets. To counter these two challenges, this paper proposes a system that enhances supervised learning for polarity classification by leveraging on linguistic rules and sentic computing resources. The proposed method is evaluated on two publicly available Twitter corpora to illustrate its effectiveness.", "title": "" }, { "docid": "10fa3df6bc00cb1165d4ef07d6e2f85c", "text": "We present a novel algorithm for view synthesis that utilizes a soft 3D reconstruction to improve quality, continuity and robustness. Our main contribution is the formulation of a soft 3D representation that preserves depth uncertainty through each stage of 3D reconstruction and rendering. We show that this representation is beneficial throughout the view synthesis pipeline. During view synthesis, it provides a soft model of scene geometry that provides continuity across synthesized views and robustness to depth uncertainty. During 3D reconstruction, the same robust estimates of scene visibility can be applied iteratively to improve depth estimation around object edges. Our algorithm is based entirely on O(1) filters, making it conducive to acceleration and it works with structured or unstructured sets of input views. We compare with recent classical and learning-based algorithms on plenoptic lightfields, wide baseline captures, and lightfield videos produced from camera arrays.", "title": "" }, { "docid": "e18ddc1b569a6f39ee5cbf133738a2a1", "text": "Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary— a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout’s discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.", "title": "" } ]
scidocsrr
c451fd0cb6ff3f6f33922eb71fe4b875
The Post Adoption Switching Of Social Network Service: A Human Migratory Model
[ { "docid": "1c0efa706f999ee0129d21acbd0ef5ab", "text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN", "title": "" }, { "docid": "9948738a487ed899ec50ac292e1f9c6d", "text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.", "title": "" }, { "docid": "013bf71ab18747afefa07cbe6ae6d477", "text": "Mobile commerce is becoming increasingly important in business. This trend is particularly evident in the service industry. To cope with this demand, various platforms have been proposed to provide effective mobile commerce solutions. Among these solutions, wireless application protocol (WAP) is one of the most widespread technical standards for mobile commerce. Following continuous technical evolution, WAP has come to include various new features. However, WAP services continue to struggle for market share. Hence, understanding WAP service adoption is increasingly important for enterprises interested in developing mobile commerce. This study aims to (1) identify the critical factors of WAP service adoption; (2) explore the relative importance of each factor for users who adopt WAP and those who do not; (3) examine the causal relationships among variables on WAP service adoption behavior. This study conducts an empirical test of WAP service adoption in Taiwan, based on theory of planned behavior (TPB) and innovation diffusion theory (IDT). The results help clarify the critical factors influences on WAP service adoption in the Greater China economic region. The Greater China economic region is a rapidly growing market. Many western telecommunication enterprises are strongly interested in providing wireless services in Shanghai, Singapore, Hong Kong and Taipei. Since these cities share a similar culture and the same language, the analytical results and conclusions of this study may be a good reference for global telecommunication enterprises to establish the developing strategy for their eastern branches. From the analysis conducted in this study, the critical factors for influences on WAP service adoption include connection speed, service cost, user satisfaction, personal innovativeness, ease of use, peer influence, and facilitating condition. 
Therefore, this study proposes that strategies for marketing WAP services in the Greater China economic region should pay increased attention to these factors. Notably, this study also provides some suggestions for subsequent researchers and practitioners seeking to understand WAP service adoption behavior.", "title": "" } ]
[ { "docid": "11f84f99de269ca5ca43fc6d761504b7", "text": "Effective use of distributed collaboration environments requires shared mental models that guide users in sensemaking and categorization. In Lotus Notes -based collaboration systems, such shared models are usually implemented as views and document types. TeamRoom, developed at Lotus Institute, implements in its design a theory of effective social process that creates a set of team-specific categories, which can then be used as a basis for knowledge sharing, collaboration, and team memory. This paper reports an exploratory study in collective concept formation in the TeamRoom environment. The study was run in an ecological setting, while the team members used the system for their everyday work. We apply theory developed by Lev Vygotsky, and use a modified version of an experiment on concept formation, devised by Lev Sakharov, and discussed in Vygotsky (1986). Vygotsky emphasized the role of language, cognitive artifacts, and historical and social sources in the development of thought processes. Within the Vygotskian framework it becomes clear that development of thinking does not end in adolescence. In teams of adult people, learning and knowledge creation are continuous processes. New concepts are created, shared, and developed into systems. The question, then, becomes how spontaneous concepts are collectively generated in teams, how they become integrated as systems, and how computer mediated collaboration environments affect these processes. d in ittle ons", "title": "" }, { "docid": "bf7bc12a4f5cbac481c8a0a4e92854b9", "text": "Recurrent neural networks (RNN), especially the ones requiring extremely long term memories, are difficult to training. Hence, they provide an ideal testbed for benchmarking the performance of optimization algorithms. This paper reports test results of a recently proposed preconditioned stochastic gradient descent (PSGD) algorithm on RNN training. We find that PSGD may outperform Hessian-free optimization which achieves the state-of-the-art performance on the target problems, although it is only slightly more complicated than stochastic gradient descent (SGD) and is user friendly, virtually a tuning free algorithm.", "title": "" }, { "docid": "4762cbac8a7e941f26bce8217cf29060", "text": "The 2-D maximum entropy method not only considers the distribution of the gray information, but also takes advantage of the spatial neighbor information with using the 2-D histogram of the image. As a global threshold method, it often gets ideal segmentation results even when the image s signal noise ratio (SNR) is low. However, its time-consuming computation is often an obstacle in real time application systems. In this paper, the image thresholding approach based on the index of entropy maximization of the 2-D grayscale histogram is proposed to deal with infrared image. The threshold vector (t, s), where t is a threshold for pixel intensity and s is another threshold for the local average intensity of pixels, is obtained through a new optimization algorithm, namely, the particle swarm optimization (PSO) algorithm. PSO algorithm is realized successfully in the process of solving the 2-D maximum entropy problem. The experiments of segmenting the infrared images are illustrated to show that the proposed method can get ideal segmentation result with less computation cost. 2004 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "721ff703dfafad6b1b330226c36ed641", "text": "In the Narrowband Internet-of-Things (NB-IoT) LTE systems, the device shall be able to blindly lock to a cell within 200-KHz bandwidth and with only one receive antenna. In addition, the device is required to setup a call at a signal-to-noise ratio (SNR) of −12.6 dB in the extended coverage mode. A new set of synchronization signals have been introduced to provide data-aided synchronization and cell search. In this letter, we present a procedure for NB-IoT cell search and initial synchronization subject to the new challenges given the new specifications. Simulation results show that this method not only provides the required performance at very low SNRs, but also can be quickly camped on a cell, if any.", "title": "" }, { "docid": "9d5de7a0330d8bba49eb8d73597473b9", "text": "Web crawlers are highly automated and seldom regulated manually. The diversity of crawler activities often leads to ethical problems such as spam and service attacks. In this research, quantitative models are proposed to measure the web crawler ethics based on their behaviors on web servers. We investigate and define rules to measure crawler ethics, referring to the extent to which web crawlers respect the regulations set forth in robots.txt configuration files. We propose a vector space model to represent crawler behavior and measure the ethics of web crawlers based on the behavior vectors. The results show that ethicality scores vary significantly among crawlers. Most commercial web crawlers' behaviors are ethical. However, many commercial crawlers still consistently violate or misinterpret certain robots.txt rules. We also measure the ethics of big search engine crawlers in terms of return on investment. The results show that Google has a higher score than other search engines for a US website but has a lower score than Baidu for Chinese websites.", "title": "" }, { "docid": "766c723d00ac15bf31332c8ab4b89b63", "text": "For those people without artistic talent, they can only draw rough or even awful doodles to express their ideas. We propose a doodle beautification system named Doodle Master, which can transfer a rough doodle to a plausible image and also keep the semantic concepts of the drawings. The Doodle Master applies the VAE/GAN model to decode and generate the beautified result from a constrained latent space. To achieve better performance for sketch data which is more like discrete distribution, a shared-weight method is proposed to improve the learnt features of the discriminator with the aid of the encoder. Furthermore, we design an interface for the user to draw with basic drawing tools and adjust the number of reconstruction times. 
The experiments show that the proposed Doodle Master system can successfully beautify the rough doodle or sketch in real-time.", "title": "" }, { "docid": "c337226d663e69ecde67ff6f35ba7654", "text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.", "title": "" }, { "docid": "b5b4e637065ba7c0c18a821bef375aea", "text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.", "title": "" }, { "docid": "bf44cc7e8e664f930edabf20ca06dd29", "text": "Nowadays, our living environment is rich in radio-frequency energy suitable for harvesting. This energy can be used for supplying low-power consumption devices. In this paper, we analyze a new type of a Koch-like antenna which was designed for energy harvesting specifically. The designed antenna covers two different frequency bands (GSM 900 and Wi-Fi). Functionality of the antenna is verified by simulations and measurements.", "title": "" }, { "docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2", "text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). 
The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).", "title": "" }, { "docid": "6097315ac2e4475e8afd8919d390babf", "text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.", "title": "" }, { "docid": "9d089af812c0fdd245a218362d88b62a", "text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.", "title": "" }, { "docid": "61615f5aefb0aa6de2dd1ab207a966d5", "text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. 
It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.", "title": "" }, { "docid": "4d297680cd342f46a5a706c4969273b8", "text": "Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology.", "title": "" }, { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" }, { "docid": "30520912723d67f7d07881aa33cdf229", "text": "OBJECTIVE\nA study to examine the incidence and characteristics of concussions among Canadian university athletes during 1 full year of football and soccer participation.\n\n\nDESIGN\nRetrospective survey.\n\n\nPARTICIPANTS\nThree hundred eighty Canadian university football and 240 Canadian university soccer players reporting to 1999 fall training camp. Of these, 328 football and 201 soccer players returned a completed questionnaire.\n\n\nMAIN OUTCOME MEASURES\nBased on self-reported symptoms, calculations were made to determine the number of concussions experienced during the previous full year of football or soccer participation, the duration of symptoms, the time for return to play, and any associated risk factors for concussions.\n\n\nRESULTS\nOf all the athletes who returned completed questionnaires, 70.4% of the football players and 62.7% of the soccer players had experienced symptoms of a concussion during the previous year. Only 23.4% of the concussed football players and 19.8% of the concussed soccer players realized they had suffered a concussion. More than one concussion was experienced by 84.6% of the concussed football players and 81.7% of the concussed soccer players. Examining symptom duration, 27.6% of all concussed football players and 18.8% of all concussed soccer players experienced symptoms for at least 1 day or longer. Tight end and defensive lineman were the positions most commonly affected in football, while goalies were the players most commonly affected in soccer. Variables that increased the odds of suffering a concussion during the previous year for football players included a history of a traumatic loss of consciousness or a recognized concussion in the past. 
Variables that increased the odds of suffering a concussion during the previous year for soccer players included a past history of a recognized concussion while playing soccer and being female.\n\n\nCONCLUSIONS\nUniversity football and soccer players seem to be experiencing a significant amount of concussions while participating in their respective sports. Variables that seem to increase the odds of suffering a concussion during the previous year for football and soccer players include a history of a recognized concussion. Despite being relatively common, symptoms of concussion may not be recognized by many players.", "title": "" }, { "docid": "28d8cad6fda1f1345b9905e71495e745", "text": "To provide COSMOS, a dynamic model based manipulator control system, with an improved dynamic model, a PUMA 560 arm was disassembled; the inertial properties of the individual links were measured; and an explicit model incorporating all of the non-zero measured parameters was derived. The explicit model of the PUMA arm has been obtained with a derivation procedure comprised of several heuristic rules for simplification. A simplified model, abbreviated from the full explicit model with a 1% significance criterion, can be evaluated with 305 calculations, one fifth the number required by the recursive Newton-Euler method. The procedure used to derive the model is laid out; the measured inertial parameters are presented, and the model is included in an appendix.", "title": "" }, { "docid": "ff6a487e49d1fed033ad082ad7cd0524", "text": "We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.", "title": "" }, { "docid": "5063adc5020cacddb5a4c6fd192fc17e", "text": "In this paper, a novel 1-to-4 modified Wilkinson power divider operating over the frequency range of (3 GHz to 8 GHz) is proposed. The design perception of the proposed divider is based on two different stages and printed on FR4 (Epoxy laminate material) with the thickness of 1.57mm and єr =4.3 respectively. The modified design of this power divider includes curved corners instead of the sharp edges and some modification in the length of matching stubs. In addition, equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports and satisfactory isolation performance have been obtained over the mentioned frequency range. The design concept and optimization development is practicable through CST simulation software.", "title": "" } ]
scidocsrr
09d2fd2ee581d160a029aa138efd5d59
A secure distributed framework for achieving k-anonymity
[ { "docid": "21a356afff7c7b31895a3c11c2231d28", "text": "There has been concern over the apparent conflict between privacy and data mining. There is no inherent conflict, as most types of data mining produce summary results that do not reveal information about individuals. The process of data mining may use private data, leading to the potential for privacy breaches. Secure Multiparty Computation shows that results can be produced without revealing the data used to generate them. The problem is that general techniques for secure multiparty computation do not scale to data-mining size computations. This paper presents an efficient protocol for securely determining the size of set intersection, and shows how this can be used to generate association rules where multiple parties have different (and private) information about the same set of individuals.", "title": "" }, { "docid": "83f59014cebd1f0fb65d76b7239194e1", "text": "The increase in volume and sensitivity of data communicated and processed over the Internet has been accompanied by a corresponding need for e-commerce techniques in which entities can participate in a secure and anonymous fashion. Even simple arithmetic operations over a set of integers partitioned over a network require sophisticated algorithms. As a part of our earlier work, we have developed a secure protocol for computing dot products of two vectors. In this paper,we present a secure protocol for Yao’s millionaires’ problem. In this problem, each of the two participating parties have a number and the objective is to determine whose number is larger without disclosing any information about the numbers. This problem has direct applications in on-line bidding and auctions. Furthermore, combined with a secure dot-product, a solution to this secure multiparty computation provides necessary building blocks for such basic operations as frequent item-set generation in association rule mining. Although an asymptotically optimal solution for the secure multiparty computation of the ‘less-or-equal’ predicate exists in literature, this protocol is not suited for practical applications. Here, we present a protocol which has a much simpler structure and is more efficient for numbers in ranges practically encountered in typical ecommerce applications. Furthermore, advances in cryptanalysis and the subsequent increase in key lengths for public-key cryptographic systems accentuate the advantage of the proposed protocol. We present experimental evidence demonstrating the efficiency of the proposed protocol both in terms of time and communication overhead.", "title": "" } ]
[ { "docid": "a827d89c56521de7dff8a59039c52181", "text": "A set of tools is being prepared in the frame of ESA activity [18191/04/NL] labelled: \"Mars Rover Chassis Evaluation Tools\" to support design, selection and optimisation of space exploration rovers in Europe. This activity is carried out jointly by Contraves Space as prime contractor, EPFL, DLR, Surrey Space Centre and EADS Space Transportation. This paper describes the current results of this study and its intended used for selection, design and optimisation on different wheeled vehicles. These tools would also allow future developments for a more efficient motion control on rover. INTRODUCTION AND MOTIVATION A set of tools is being developed to support the design of planetary rovers in Europe. The RCET will enable accurate predictions and characterisations of rover performances as related to the locomotion subsystem. This infrastructure consists of both S/W and H/W elements that will be interwoven to result in a user-friendly environment. The actual need for mobility increased in terms of range and duration. In this respect, redesigning specific aspects of the past rover concepts, in particular the development of most suitable all terrain performances is appropriate [9]. Analysis and design methodologies for terrestrial surface vehicles to operate on unprepared surfaces have been successfully applied to planet rover developments for the first time during the Apollo LRV manned lunar rover programme of the late 1960’s and early 1970’s [1,2]. Key to this accomplishment and to rational surface vehicle designs in general are quantitative descriptions of the terrain and of the interaction between the terrain and the vehicle. Not only the wheel/ground interaction is essential for efficient locomotion, but also the rover kinematics concepts. In recent terrestrial off-the-road vehicle development and acquisition, especially in the military, the so-called ‘Virtual Proving Ground’ (VPG) Simulation Technology has become essential. The integrated environments previously available to design engineers involved sophisticated hardware and software and cost hundreds of thousands of Euros. The experimentation and operational costs associated with the use of such instruments were even more alarming. The promise of VPG is to lower the risk and cost in vehicle definition and design by allowing early concept characterisation and trade-off’s based on numerical models without having to rely on prototyping for concept assessment. A similar approach is proposed for future European planetary rover programmes and is to be enabled by RCET. The first part of this paper describes the methodology used in the RCET activity and gives an overview of the different tools under development. The next section details the theory and modules used for the simulation. Finally the last section relates the first results, the future work and concludes this paper. In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2 4, 2004", "title": "" }, { "docid": "9b62633b700a275ae25dd49bc1e459a0", "text": "We describe a new supervised machine learning approach for detecting authorship deception, a specific type of authorship attribution task particularly relevant for cybercrime forensic investigations, and demonstrate its validity on two case studies drawn from realistic online data sets. 
The core of our approach involves identifying uncharacteristic behavior for an author, based on a writeprint extracted from unstructured text samples of the author's writing. The writeprints used here involve stylometric features and content features derived from topic models, an unsupervised approach for identifying relevant keywords that relate to the content areas of a document. One innovation of our approach is to transform the writeprint feature values into a representation that individually balances characteristic and uncharacteristic traits of an author, and we subsequently apply a Sparse Multinomial Logistic Regression classifier to this novel representation. Our method yields high accuracy for authorship deception detection on the two case studies, confirming its utility.", "title": "" }, { "docid": "e1b9795030dac51172c20a49113fac23", "text": "Bin packing problems are a class of optimization problems that have numerous applications in the industrial world, ranging from efficient cutting of material to packing various items in a larger container. We consider here only rectangular items cut off an infinite strip of material as well as off larger sheets of fixed dimensions. This problem has been around for many years and a great number of publications can be found on the subject. Nevertheless, it is often difficult to reconcile a theoretical paper and practical application of it. The present work aims to create simple but, at the same time, fast and efficient algorithms, which would allow one to write high-speed and capable software that can be used in a real-time application.", "title": "" }, { "docid": "57c705e710f99accab3d9242fddc5ac8", "text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.", "title": "" }, { "docid": "c5bbdfc0da1635ad0a007e60e224962f", "text": "Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent. In this paper we critically analyze this method and its properties, and show how it can be viewed as a type of approximate 2nd-order optimization method, where the Fisher information matrix used to compute the natural gradient direction can be viewed as an approximation of the Hessian. This perspective turns out to have significant implications for how to design a practical and robust version of the method. 
Among our various other contributions is a thorough analysis of the convergence speed of natural gradient descent and more general stochastic methods, a critical examination of the oft-used “empirical” approximation of the Fisher matrix, and an analysis of the (approximate) parameterization invariance property possessed by the method, which we show still holds for certain other choices of the curvature matrix, but notably not the Hessian.", "title": "" }, { "docid": "db7a4ab8d233119806e7edf2a34fffd1", "text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "7a6876aa158c9bc717bd77319f4d2494", "text": "Scripts encode knowledge of prototypical sequences of events. We describe a Recurrent Neural Network model for statistical script learning using Long Short-Term Memory, an architecture which has been demonstrated to work well on a range of Artificial Intelligence tasks. We evaluate our system on two tasks, inferring held-out events from text and inferring novel events from text, substantially outperforming prior approaches on both tasks.", "title": "" }, { "docid": "1eaad8b6a2bde878373f37fe7e67b48c", "text": "Speech separation can be formulated as a classification problem. In classification-based speech separation, supervised learning is employed to classify time-frequency units as either speech-dominant or noise-dominant. In very low signal-to-noise ratio (SNR) conditions, acoustic features extracted from a mixture are crucial for correct classification. In this study, we systematically evaluate a range of promising features for classification-based separation using six nonstationary noises at the low SNR level of -5 dB, which is chosen with the goal of improving human speech intelligibility in mind. 
In addition, we propose a new feature called multi-resolution cochleagram (MRCG). The new feature is constructed by combining four cochleagrams at different spectrotemporal resolutions in order to capture both the local and contextual information. Experimental results show that MRCG gives the best classification results among all evaluated features. In addition, our results indicate that auto-regressive moving average (ARMA) filtering, a post-processing technique for improving automatic speech recognition features, also improves many acoustic features for speech separation.", "title": "" }, { "docid": "ea1f836ba53e49663d5b7f480a2f8772", "text": "Strengths and weaknesses of modern widebandwidth bipolar transistor operational amplifiers are investigated and compared with respect to bandwidth, slew rate, noise, distortion, and power. This paper traces the evolution of operational amplifier designs since vacuum tube days to give a perspective of the large number of circuit variations used over time. Of particular value is the ability to use many of these circuit design options as the basis of new amplifiers. In addition, an array of operational amplifier components fabricated on the AT&T CBIC V2 [1] process is described. This design incorporates many of the architectural techniques that Vin have evolved over the years to produce four separate operational amplifier on a single base wafer. The process design methodology requires identifying the common elements in each architecture and the minimum number of additional components required to implement four unique architectures on the array. +V", "title": "" }, { "docid": "645e69205aea3887d954f825306a1052", "text": "Continuous outlier detection in data streams has important applications in fraud detection, network security, and public health. The arrival and departure of data objects in a streaming manner impose new challenges for outlier detection algorithms, especially in time and space efficiency. In the past decade, several studies have been performed to address the problem of distance-based outlier detection in data streams (DODDS), which adopts an unsupervised definition and does not have any distributional assumptions on data values. Our work is motivated by the lack of comparative evaluation among the state-of-the-art algorithms using the same datasets on the same platform. We systematically evaluate the most recent algorithms for DODDS under various stream settings and outlier rates. Our extensive results show that in most settings, the MCOD algorithm offers the superior performance among all the algorithms, including the most recent algorithm Thresh LEAP.", "title": "" }, { "docid": "a0d6536cd8c85fe87cb316f92b489d32", "text": "As a design of information-centric network architecture, Named Data Networking (NDN) provides content-based security. The signature binding the name with the content is the key point of content-based security in NDN. However, signing a content will introduce a significant computation overhead, especially for dynamically generated content. Adversaries can take advantages of such computation overhead to deplete the resources of the content provider. In this paper, we propose Interest Cash, an application-based countermeasure against Interest Flooding for dynamic content. Interest Cash requires a content consumer to solve a puzzle before it sends an Interest. The content consumer should provide a solution to this puzzle as cash to get the signing service from the content provider. 
The experiment shows that an adversary has to use more than 300 times computation resources of the content provider to commit a successful attack when Interest Cash is used.", "title": "" }, { "docid": "3a2740b7f65841f7eb4f74a1fb3c9b65", "text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.", "title": "" }, { "docid": "4138f62dfaefe49dd974379561fb6fea", "text": "For a set of 1D vectors, standard singular value decomposition (SVD) is frequently applied. For a set of 2D objects such as images or weather maps, we form 2DSVD, which computes principal eigenvectors of rowrow and column-column covariance matrices, exactly as in the standard SVD. We study optimality properties of 2DSVD as low-rank approximation and show that it provides a framework unifying two recent approaches. Experiments on images and weather maps illustrate the usefulness of 2DSVD.", "title": "" }, { "docid": "41c3505d1341247972d99319cba3e7ba", "text": "A 32-year-old pregnant woman in the 25th week of pregnancy underwent oral glucose tolerance screening at the diabetologist's. Later that day, she was found dead in her apartment possibly poisoned with Chlumsky disinfectant solution (solutio phenoli camphorata). An autopsy revealed chemical burns in the digestive system. The lungs and the brain showed signs of severe edema. The blood of the woman and fetus was analyzed using gas chromatography with mass spectrometry and revealed phenol, its metabolites (phenyl glucuronide and phenyl sulfate) and camphor. No ethanol was found in the blood samples. Both phenol and camphor are contained in Chlumsky disinfectant solution, which is used for disinfecting surgical equipment in healthcare facilities. Further investigation revealed that the deceased woman had been accidentally administered a disinfectant instead of a glucose solution by the nurse, which resulted in acute intoxication followed by the death of the pregnant woman and the fetus.", "title": "" }, { "docid": "8fa61b7d1844eee81d1e02b12b654b16", "text": "Time series are ubiquitous, and a measure to assess their similarity is a core part of many computational systems. 
In particular, the similarity measure is the most essential ingredient of time series clustering and classification systems. Because of this importance, countless approaches to estimate time series similarity have been proposed. However, there is a lack of comparative studies using empirical, rigorous, quantitative, and large-scale assessment strategies. In this article, we provide an extensive evaluation of similarity measures for time series classification following the aforementioned principles. We consider 7 different measures coming from alternative measure ‘families’, and 45 publicly-available time series data sets coming from a wide variety of scientific domains. We focus on out-of-sample classification accuracy, but in-sample accuracies and parameter choices are also discussed. Our work is based on rigorous evaluation methodologies and includes the use of powerful statistical significance tests to derive meaningful conclusions. The obtained results show the equivalence, in terms of accuracy, of a number of measures, but with one single candidate outperforming the rest. Such findings, together with the followed methodology, invite researchers on the field to adopt a more consistent evaluation criteria and a more informed decision regarding the baseline measures to which new developments should be compared.", "title": "" }, { "docid": "e983898bf746ecb5ea8590f3d3beb337", "text": "The concept of Bitcoin was first introduced by an unknown individual (or a group of people) named Satoshi Nakamoto before it was released as open-source software in 2009. Bitcoin is a peer-to-peer cryptocurrency and a decentralized worldwide payment system for digital currency where transactions take place among users without any intermediary. Bitcoin transactions are performed and verified by network nodes and then registered in a public ledger called blockchain, which is maintained by network entities running Bitcoin software. To date, this cryptocurrency is worth close to U.S. $150 billion and widely traded across the world. However, as Bitcoin’s popularity grows, many security concerns are coming to the forefront. Overall, Bitcoin security inevitably depends upon the distributed protocols-based stimulant-compatible proof-of-work that is being run by network entities called miners, who are anticipated to primarily maintain the blockchain (ledger). As a result, many researchers are exploring new threats to the entire system, introducing new countermeasures, and therefore anticipating new security trends. In this survey paper, we conduct an intensive study that explores key security concerns. We first start by presenting a global overview of the Bitcoin protocol as well as its major components. Next, we detail the existing threats and weaknesses of the Bitcoin system and its main technologies including the blockchain protocol. Last, we discuss current existing security studies and solutions and summarize open research challenges and trends for future research in Bitcoin security.", "title": "" }, { "docid": "fcd98a7540dd59e74ea71b589c255adb", "text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. 
In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.", "title": "" }, { "docid": "038c10660f6dcd354dd54027bd9e65eb", "text": "A new architecture for a very fast and secure public key crypto-coprocessor Crypto@1408Bit usable in Smart Card ICs is presented. The key elements of Crypto@1408Bit architecture are a very fast Look Ahead Algorithm for modular multiplication, a very fast and secure serial-parallel adder, a fast and chip area efficient carry handling and a sufficient number of working registers enabling easy programming. With this architecture a new dimension of crypto performance and security against side channel attacks is achieved. Compared to crypto-coprocessors currently available on the Smart Card IC market Crypto@1408Bit offers a performance more than an order of magnitude faster. The security of the crypto-coprocessor relies on hardware and software security features like dual-rail-security logic against differential power attacks, highly secure registers for critical operands and a register length with up to 128 Bit buffer for randomization of operands.", "title": "" }, { "docid": "9a3f49d9c8ac513124e75b59f5547a78", "text": "Goniometry has been widely used to analyze human motion. The goniometer is a tool to measure the angular change on systems of a single degree of freedom. However, it is inappropriate to detect movements with multiple degrees of freedom. Kinovea is a free software application for the analysis, comparison and evaluation of movement. It is generally used to evaluate the progress of an athlete in training. Many studies in the literature have proposed solutions for measuring combined movements, especially in lower limbs. In this paper, we discuss the possibility to use Kinovea in rehabilitation movements for lower limbs. We used a webcam to record the movement of the patient's leg. The detection and analysis was carried out using Kinovea with position markers to measure angular positions of lower limbs. To find the angle of the hip and knee, a mathematical model based on a robot of two degrees of freedom was proposed. The results of position, velocity and acceleration for ankle and knee were presented in an XY plane. In addition, the angular measure of hip and knee was obtained using the inverse kinematics of a 2RR robot.", "title": "" } ]
scidocsrr
c7324bf9c0aba75b8812869ace2e6518
Online Semi-Supervised Learning with Deep Hybrid Boltzmann Machines and Denoising Autoencoders
[ { "docid": "b408788cd974438f32c1858cda9ff910", "text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.", "title": "" }, { "docid": "ef04d580d7c1ab165335145c13a1701f", "text": "Finding good representations of text documents is crucial in information retrieval and classification systems. Today the most popular document representation is based on a vector of word counts in the document. This representation neither captures dependencies between related words, nor handles synonyms or polysemous words. In this paper, we propose an algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network. The model can be trained efficiently on partially labeled corpora, producing very compact representations of documents, while retaining as much class information and joint word statistics as possible. We show that it is advantageous to exploit even a few labeled samples during training.", "title": "" }, { "docid": "6eeeb343309fc24326ed42b62d5524b1", "text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "title": "" } ]
[ { "docid": "381ce2a247bfef93c67a3c3937a29b5a", "text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.", "title": "" }, { "docid": "6b1e67c1768f9ec7a6ab95a9369b92d1", "text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.", "title": "" }, { "docid": "2292c60d69c94f31c2831c2f21c327d8", "text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis.", "title": "" }, { "docid": "cd4e04370b1e8b1f190a3533c3f4afe2", "text": "Perception of depth is a central problem m machine vision. 
Stereo is an attractive technique for depth perception because, compared with monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements, and unlike \"active\" approaches such as radar and laser ranging, it is suitable in almost all application domains. Computational stereo is broadly defined as the recovery of the three-dimensional characteristics of a scene from multiple images taken from different points of view. First, each of the functional components of the computational stereo paradigm--image acquisition, camera modeling, feature acquisition, image matching, depth determination, and interpolation--is identified and discussed. Then, the criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented. Finally, a representative sampling of computational stereo research is provided.", "title": "" }, { "docid": "548525974665303b813b1614dd39350c", "text": "We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder-decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow us to learn the effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and demonstrate its effectiveness through quantitative and qualitative evaluation.", "title": "" }, { "docid": "313c68843b2521d553772dd024eec202", "text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-the-art approaches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed out several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.", "title": "" }, { "docid": "274373d46b748d92e6913496507353b1", "text": "This paper introduces a blind watermarking scheme based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. 
One loop of the learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from an image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking schemes. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.", "title": "" }, { "docid": "20ce6bde3c15b63cad0a421282dbcdc6", "text": "Baseline detection is still a challenging task for heterogeneous collections of historical documents. We present a novel approach to baseline extraction in such settings, turning out the winning entry to the ICDAR 2017 Competition on Baseline detection (cBAD). It utilizes deep convolutional nets (CNNs) for both, the actual extraction of baselines, as well as for a simple form of layout analysis in a pre-processing step. To the best of our knowledge it is the first CNN-based system for baseline extraction applying a U-net architecture and sliding window detection, profiting from a high local accuracy of the candidate lines extracted. Final baseline post-processing complements our approach, compensating for inaccuracies mainly due to missing context information during sliding window detection. We experimentally evaluate the components of our system individually on the cBAD dataset. Moreover, we investigate how it generalizes to different data by means of the dataset used for the baseline extraction task of the ICDAR 2017 Competition on Layout Analysis for Challenging Medieval Manuscripts (HisDoc). A comparison with the results reported for HisDoc shows that it also outperforms the contestants of the latter.", "title": "" }, { "docid": "c2c0ed74c63c479d772a743a167c18b3", "text": "Neural networks have been successfully used in the processing of Lidar data, especially in the scenario of autonomous driving. However, existing methods heavily rely on pre-processing of the pulse signals derived from Lidar sensors and therefore result in high computational overhead and considerable latency. In this paper, we propose an approach utilizing Spiking Neural Network (SNN) to address the object recognition problem directly with raw temporal pulses. To help with the evaluation and benchmarking, a comprehensive temporal pulses data-set was created to simulate Lidar reflection in different road scenarios. Being tested with regard to recognition accuracy and time efficiency under different noise conditions, our proposed method shows remarkable performance with the inference accuracy up to 99.83% (with 10% noise) and the average recognition delay as low as 265 ns. It highlights the potential of SNN in autonomous driving and some related applications. In particular, to our best knowledge, this is the first attempt to use SNN to directly perform object recognition on raw Lidar temporal pulses.", "title": "" }, { "docid": "3d47cbee5b76ea68a12f6e026fbc2abf", "text": "This paper presents the first realtime 3D eye gaze capture method that simultaneously captures the coordinated movement of 3D eye gaze, head poses and facial expression deformation using a single RGB camera. Our key idea is to complement a realtime 3D facial performance capture system with an efficient 3D eye gaze tracker. We start the process by automatically detecting important 2D facial features for each frame. 
The detected facial features are then used to reconstruct 3D head poses and large-scale facial deformation using multi-linear expression deformation models. Next, we introduce a novel user-independent classification method for extracting iris and pupil pixels in each frame. We formulate the 3D eye gaze tracker in the Maximum A Posterior (MAP) framework, which sequentially infers the most probable state of 3D eye gaze at each frame. The eye gaze tracker could fail when eye blinking occurs. We further introduce an efficient eye close detector to improve the robustness and accuracy of the eye gaze tracker. We have tested our system on both live video streams and the Internet videos, demonstrating its accuracy and robustness under a variety of uncontrolled lighting conditions and overcoming significant differences of races, genders, shapes, poses and expressions across individuals.", "title": "" }, { "docid": "98e0f92258df3caf516e257fa40e96b0", "text": "In this paper, we introduce individualness of detection candidates as a complement to objectness for pedestrian detection. The individualness assigns a single detection for each object out of raw detection candidates given by either object proposals or sliding windows. We show that conventional approaches, such as non-maximum suppression, are sub-optimal since they suppress nearby detections using only detection scores. We use a determinantal point process combined with the individualness to optimally select final detections. It models each detection using its quality and similarity to other detections based on the individualness. Then, detections with high detection scores and low correlations are selected by measuring their probability using a determinant of a matrix, which is composed of quality terms on the diagonal entries and similarities on the off-diagonal entries. For concreteness, we focus on the pedestrian detection problem as it is one of the most challenging problems due to frequent occlusions and unpredictable human motions. Experimental results demonstrate that the proposed algorithm works favorably against existing methods, including non-maximal suppression and a quadratic unconstrained binary optimization based method.", "title": "" }, { "docid": "e42ed44464fa4df2514e7560da2eb837", "text": "The combination of the compactness of networks, featuring small diameters, and their complex architectures results in a variety of critical effects dramatically different from those in cooperative systems on lattices. In the last few years, researchers have made important steps toward understanding the qualitatively new critical phenomena in complex networks. We review the results, concepts, and methods of this rapidly developing field. Here we mostly consider two closely related classes of these critical phenomena, namely structural phase transitions in the network architectures and transitions in cooperative models on networks as substrates. We also discuss systems where a network and interacting agents on it influence each other. We overview a wide range of critical phenomena in equilibrium and growing networks including the birth of the giant connected component, percolation, k-core percolation, phenomena near epidemic thresholds, condensation transitions, critical phenomena in spin models placed on networks, synchronization, and self-organized criticality effects in interacting systems on networks. 
We also discuss strong finite size effects in these systems and highlight open problems and perspectives.", "title": "" }, { "docid": "7643347a62e8835b5cc4b1b432f504c1", "text": "Simulation systems have become an essential component in the development and validation of autonomous driving technologies. The prevailing state-of-the-art approach for simulation is to use game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (e.g., the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance. In this paper we present a novel approach to address these issues: Augmented Autonomous Driving Simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photo-realistic simulation images and renderings. More specifically, we use LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generate highly plausible traffic flows for cars and pedestrians and compose them into the background. The composite images can be re-synthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photo-realistic, fully annotated, and ready for end-to-end training and testing of autonomous driving systems from perception to planning. We explain our system design and validate our algorithms with a number of autonomous driving tasks from detection to segmentation and predictions. Compared to traditional approaches, our method offers unmatched scalability and realism. Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.", "title": "" }, { "docid": "5a4aa3f4ff68fab80d7809ff04a25a3b", "text": "OBJECTIVE\nThe technique of short segment pedicle screw fixation (SSPSF) has been widely used for stabilization in thoracolumbar burst fractures (TLBFs), but some studies reported high rate of kyphosis recurrence or hardware failure. This study was to evaluate the results of SSPSF including fractured level and to find the risk factors concerned with the kyphosis recurrence in TLBFs.\n\n\nMETHODS\nThis study included 42 patients, including 25 males and 17 females, who underwent SSPSF for stabilization of TLBFs between January 2003 and December 2010. For radiologic assessments, Cobb angle (CA), vertebral wedge angle (VWA), vertebral body compression ratio (VBCR), and difference between VWA and Cobb angle (DbVC) were measured. The relationships between kyphosis recurrence and radiologic parameters or demographic features were investigated. Frankel classification and low back outcome score (LBOS) were used for assessment of clinical outcomes.\n\n\nRESULTS\nThe mean follow-up period was 38.6 months. CA, VWA, and VBCR were improved after SSPSF, and these parameters were well maintained at the final follow-up with minimal degree of correction loss. Kyphosis recurrence showed a significant increase in patients with Denis burst type A, load-sharing classification (LSC) score >6 or DbVC >6 (p<0.05). 
There were no patients who worsened to clinical outcome, and there was no significant correlation between kyphosis recurrence and clinical outcome in this series.\n\n\nCONCLUSION\nSSPSF including the fractured vertebra is an effective surgical method for restoration and maintenance of vertebral column stability in TLBFs. However, kyphosis recurrence was significantly associated with Denis burst type A fracture, LSC score >6, or DbVC >6.", "title": "" }, { "docid": "c536e79078d7d5778895e5ac7f02c95e", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" }, { "docid": "57457909ea5fbee78eccc36c02464942", "text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.", "title": "" }, { "docid": "e5bbf88eedf547551d28a731bd4ebed7", "text": "We conduct an empirical study to test the ability of convolutional neural networks (CNNs) to reduce the effects of nuisance transformations of the input data, such as location, scale and aspect ratio. We isolate factors by adopting a common convolutional architecture either deployed globally on the image to compute class posterior distributions, or restricted locally to compute class conditional distributions given location, scale and aspect ratios of bounding boxes determined by proposal heuristics. In theory, averaging the latter should yield inferior performance compared to proper marginalization. Yet empirical evidence suggests the converse, leading us to conclude that - at the current level of complexity of convolutional architectures and scale of the data sets used to train them - CNNs are not very effective at marginalizing nuisance variability. 
We also quantify the effects of context on the overall classification task and its impact on the performance of CNNs, and propose improved sampling techniques for heuristic proposal schemes that improve end-to-end performance to state-of-the-art levels. We test our hypothesis on a classification task using the ImageNet Challenge benchmark and on a wide-baseline matching task using the Oxford and Fischer's datasets.", "title": "" }, { "docid": "16bfb378b82af79cdb8d82d8e152303a", "text": "Efficient methods for storing and querying are critical for scaling high-order m-gram language models to large corpora. We propose a language model based on compressed suffix trees, a representation that is highly compact and can be easily held in memory, while supporting queries needed in computing language model probabilities on-the-fly. We present several optimisations which improve query runtimes up to 2500×, despite only incurring a modest increase in construction time and memory usage. For large corpora and high Markov orders, our method is highly competitive with the state-of-the-art KenLM package. It imposes much lower memory requirements, often by orders of magnitude, and has runtimes that are either similar (for training) or comparable (for querying).", "title": "" }, { "docid": "9b1f40687d0c9b78efdf6d1e19769294", "text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.", "title": "" } ]
scidocsrr
bd6beaf1f4ef4fcc9f860dac019de463
Spring Embedders and Force Directed Graph Drawing Algorithms
[ { "docid": "b66846f076d41c8be3f5921cc085d997", "text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.", "title": "" }, { "docid": "6073601ab6d6e1dbba7a42c346a29436", "text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.", "title": "" } ]
[ { "docid": "3205184f918eab105ee17bfb12277696", "text": "The Trilobita were characterized by a cephalic region in which the biomineralized exoskeleton showed relatively high morphological differentiation among a taxonomically stable set of well defined segments, and an ontogenetically and taxonomically dynamic trunk region in which both exoskeletal segments and ventral appendages were similar in overall form. Ventral appendages were homonomous biramous limbs throughout both the cephalon and trunk, except for the most anterior appendage pair that was antenniform, preoral, and uniramous, and a posteriormost pair of antenniform cerci, known only in one species. In some clades trunk exoskeletal segments were divided into two batches. In some, but not all, of these clades the boundary between batches coincided with the boundary between the thorax and the adult pygidium. The repeated differentiation of the trunk into two batches of segments from the homonomous trunk condition indicates an evolutionary trend in aspects of body patterning regulation that was achieved independently in several trilobite clades. The phylogenetic placement of trilobites and congruence of broad patterns of tagmosis with those seen among extant arthropods suggest that the expression domains of trilobite cephalic Hox genes may have overlapped in a manner similar to that seen among extant arachnates. This, coupled with the fact that trilobites likely possessed ten Hox genes, presents one alternative to a recent model in which Hox gene distribution in trilobites was equated to eight putative divisions of the trilobite body plan.", "title": "" }, { "docid": "f702a8c28184a6d49cd2f29a1e4e7ea4", "text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.", "title": "" }, { "docid": "89eee86640807e11fa02d0de4862b3a5", "text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. 
To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.", "title": "" }, { "docid": "3a74928dc955504a12dbfe7cd2deeb16", "text": "Very few large-scale music research datasets are publicly available. There is an increasing need for such datasets, because the shift from physical to digital distribution in the music industry has given the listener access to a large body of music, which needs to be cataloged efficiently and be easily browsable. Additionally, deep learning and feature learning techniques are becoming increasingly popular for music information retrieval applications, and they typically require large amounts of training data to work well. In this paper, we propose to exploit an available large-scale music dataset, the Million Song Dataset (MSD), for classification tasks on other datasets, by reusing models trained on the MSD for feature extraction. This transfer learning approach, which we refer to as supervised pre-training, was previously shown to be very effective for computer vision problems. We show that features learned from MSD audio fragments in a supervised manner, using tag labels and user listening data, consistently outperform features learned in an unsupervised manner in this setting, provided that the learned feature extractor is of limited complexity. We evaluate our approach on the GTZAN, 1517-Artists, Unique and Magnatagatune datasets.", "title": "" }, { "docid": "391cce3ac9ab87e31203637d89a8a082", "text": "MicroRNAs (miRNAs) are small conserved non-coding RNA molecules that post-transcriptionally regulate gene expression by targeting the 3' untranslated region (UTR) of specific messenger RNAs (mRNAs) for degradation or translational repression. miRNA-mediated gene regulation is critical for normal cellular functions such as the cell cycle, differentiation, and apoptosis, and as much as one-third of human mRNAs may be miRNA targets. Emerging evidence has demonstrated that miRNAs play a vital role in the regulation of immunological functions and the prevention of autoimmunity. Here we review the many newly discovered roles of miRNA regulation in immune functions and in the development of autoimmunity and autoimmune disease. 
Specifically, we discuss the involvement of miRNA regulation in innate and adaptive immune responses, immune cell development, T regulatory cell stability and function, and differential miRNA expression in rheumatoid arthritis and systemic lupus erythematosus.", "title": "" }, { "docid": "12e726dadcb76bfb6dc4f98e8b520347", "text": "Inexact and approximate circuit design is a promising approach to improve performance and energy efficiency in technology-scaled and low-power digital systems. Such strategy is suitable for error-tolerant applications involving perceptive or statistical outputs. This paper presents a novel architecture of an Inexact Speculative Adder with optimized hardware efficiency and advanced compensation technique with either error correction or error reduction. This general topology of speculative adders improves performance and enables precise accuracy control. A brief design methodology and comparative study of this speculative adder are also presented herein, demonstrating power savings up to 26 % and energy-delay-area reductions up to 60% at equivalent accuracy compared to the state-of-the-art.", "title": "" }, { "docid": "fe06ac2458e00c5447a255486189f1d1", "text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.", "title": "" }, { "docid": "6ba37b8e2a8e9f35c7d14d7544aeda61", "text": "In real-world applications, knowledge bases consisting of all the available information for a specific domain, along with the current state of affairs, will typically contain contradictory data, coming from different sources, as well as data with varying degrees of uncertainty attached. An important aspect of the effort associated with maintaining such knowledge bases is deciding what information is no longer useful; pieces of information may be outdated; may come from sources that have recently been discovered to be of low quality; or abundant evidence may be available that contradicts them. In this paper, we propose a probabilistic structured argumentation framework that arises from the extension of Presumptive Defeasible Logic Programming (PreDeLP) with probabilistic models, and argue that this formalism is capable of addressing these basic issues. The formalism is capable of handling contradictory and uncertain data, and we study non-prioritized belief revision over probabilistic PreDeLP programs that can help with knowledge-base maintenance. For belief revision, we propose a set of rationality postulates — based on well-known ones developed for classical knowledge bases — that characterize how these belief revision operations should behave, and study classes of operators along with theoretical relationships with the proposed postulates, including representation theorems stating the equivalence between classes of operators and their associated postulates. 
We then demonstrate how our framework can be used to address the attribution problem in cyber security/cyber warfare.", "title": "" }, { "docid": "d76e46eec2aa0abcbbd47b8270673efa", "text": "OBJECTIVE\nTo explore the clinical efficacy and the mechanism of acupoint autohemotherapy in the treatment of allergic rhinitis.\n\n\nMETHODS\nForty-five cases were randomized into an autohemotherapy group (24 cases) and a western medication group (21 cases). In the autohemotherapy group, the acupoint autohemotherapy was applied to the bilateral Dingchuan (EX-B 1), Fengmen (BL 12), Feishu (BL 13), Quchi (LI 11), Zusanli (ST 36) and the others. In the western medication group, loratadine tablets were prescribed. The patients were treated continuously for 3 months in both groups. The clinical symptom score was taken for the assessment of clinical efficacy. The enzyme-linked immunosorbent assay (ELISA) was adopted to determine the contents of serum interferon-gamma (IFN-gamma) and interleukin-12 (IL-12).\n\n\nRESULTS\nThe total effective rate was 83.3% (20/24) in the autohemotherapy group, which was obviously superior to 66.7% (14/21) in the western medication group (P < 0.05). After treatment, the clinical symptom scores of patients in the two groups were all reduced. The improvements in the scores of sneezing and clear nasal discharge in the autohemotherapy group were much more significant than those in the western medication group (both P < 0.05). After treatment, the serum IL-12 content of patients in the two groups was all increased to different extents as compared with that before treatment (both P < 0.05). In the autohemotherapy group, the serum IFN-gamma was increased after treatment (P < 0.05). In the western medication group, the serum IFN-gamma was not increased obviously after treatment (P > 0.05). The increase of the above index contents in the autohemotherapy group was more apparent than that in the western medication group (both P < 0.05).\n\n\nCONCLUSION\nThe acupoint autohemotherapy relieves significantly the clinical symptoms of allergic rhinitis and the therapeutic effect is better than that with oral administration of loratadine tablets, which is probably relevant with the increase of serum IL-12 content and the promotion of IFN-gamma synthesis.", "title": "" }, { "docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d", "text": "Creating short summaries of documents with respect to a query has applications in, for example, search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.", "title": "" }, { "docid": "74927f18642b088b1d2d1ff2c57eb675", "text": "AIM\nThe conventional treatment of a single missing tooth is most frequently based on the provision of a fixed dental prosthesis (FDPs). 
A variety of designs and restorative materials are available which have an impact on the treatment outcome. Consequently, it was the aim of this review to compare resin-bonded, all-ceramic and metal-ceramic FDPs based on existing evidence.\n\n\nMATERIALS AND METHODS\nAn electronic literature search using \"metal-ceramic\" AND \"fixed dental prosthesis\" AND \"clinical, all-ceramic\" AND \"fixed dental prosthesis\" AND \"clinical, resin-bonded\" AND \"fixed dental prosthesis\" AND \"clinical, fiber reinforced composite\" AND \"clinical, monolithic\" AND \"zirconia\" AND \"clinical\" was conducted and supplemented by the manual searching of bibliographies from articles already included.\n\n\nRESULTS\nA total of 258 relevant articles were identified. Metal-ceramic FDPs still show the highest survival rates of all tooth-supported restorations. Depending on the ceramic system used, all-ceramic restorations may reach comparable survival rates while the technical complications, i.e. chipping fractures of veneering materials in particular, are more frequent. Resin-bonded FDPs can be seen as long-term provisional restorations with the survival rate being higher in anterior locations and when a cantilever design is applied. Inlay-retained FDPs and the use of fiber-reinforced composites overall results in a compromised long-term prognosis. Recently advocated monolithic zirconia restorations bear the risk of low temperature degradation.\n\n\nCONCLUSIONS\nSeveral variables affect treatment planning for a given patient situation, with survival and success rates of different restorative options representing only one factor. The broad variety of designs and materials available for conventional tooth-supported restorations should still be considered as a viable treatment option for single tooth replacement.", "title": "" }, { "docid": "d8b0ef94385d1379baeb499622253a02", "text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.", "title": "" }, { "docid": "7c485c59a1662966d7d8e079c67f43ca", "text": "Given the diversity of recommendation algorithms, choosing one technique is becoming increasingly difficult. In this paper, we explore methods for combining multiple recommendation approaches. We studied rank aggregation methods that have been proposed for the metasearch task (i.e., fusing the outputs of different search engines) but have never been applied to merge top-N recommender systems. These methods require no training data nor parameter tuning. We analysed two families of methods: voting-based and score-based approaches. These rank aggregation techniques yield significant improvements over state-of-the-art top-N recommenders. 
In particular, score-based methods yielded good results; however, some voting techniques were also competitive without using score information, which may be unavailable in some recommendation scenarios. The studied methods not only improve the state of the art of recommendation algorithms but they are also simple and efficient.", "title": "" }, { "docid": "222f28aa8b4cc4eaddb21e21c9020593", "text": "We study an approach to text categorization that combines distributional clustering of words and a Support Vector Machine (SVM) classifier. This word-cluster representation is computed using the recently introduced Information Bottleneck method, which generates a compact and efficient representation of documents. When combined with the classification power of the SVM, this method yields high performance in text categorization. This novel combination of SVM with word-cluster representation is compared with SVM-based categorization using the simpler bag-of-words (BOW) representation. The comparison is performed over three known datasets. On one of these datasets (the 20 Newsgroups) the method based on word clusters significantly outperforms the word-based representation in terms of categorization accuracy or representation efficiency. On the two other sets (Reuters-21578 and WebKB) the word-based representation slightly outperforms the word-cluster representation. We investigate the potential reasons for this behavior and relate it to structural differences between the datasets.", "title": "" }, { "docid": "1573dcbb7b858ab6802018484f00ef91", "text": "There is a multitude of tools available for Business Model Innovation (BMI). However, Business models (BM) and supporting tools are not yet widely known by micro, small and medium sized companies (SMEs). In this paper, we build on analysis of 61 cases to present typical BMI paths of European SMEs. Firstly, we constructed two paths for established companies that we named as 'I want to grow' and 'I want to make my business profitable'. We also found one path for start-ups: 'I want to start a new business'. Secondly, we suggest appropriate BM toolsets for the three paths. The identified paths and related tools contribute to BMI research and practise with an aim to boost BMI in SMEs.", "title": "" }, { "docid": "90cfe22d4e436e9caa61a2ac198cb7f7", "text": "Deep Neural Networks (DNNs) are fast becoming ubiquitous for their ability to attain good accuracy in various machine learning tasks. A DNN’s architecture (i.e., its hyper-parameters) broadly determines the DNN’s accuracy and performance, and is often confidential. Attacking a DNN in the cloud to obtain its architecture can potentially provide major commercial value. Further, attaining a DNN’s architecture facilitates other, existing DNN attacks. This paper presents Cache Telepathy: a fast and accurate mechanism to steal a DNN’s architecture using the cache side channel. Our attack is based on the insight that DNN inference relies heavily on tiled GEMM (Generalized Matrix Multiply), and that DNN architecture parameters determine the number of GEMM calls and the dimensions of the matrices used in the GEMM functions. Such information can be leaked through the cache side channel. This paper uses Prime+Probe and Flush+Reload to attack VGG and ResNet DNNs running OpenBLAS and Intel MKL libraries. Our attack is effective in helping obtain the architectures by very substantially reducing the search space of target DNN architectures. 
For example, for VGG using OpenBLAS, it reduces the search space from more than 10^35 architectures to just 16.", "title": "" }, { "docid": "b583e130f5066166107e36f766f513ac", "text": "Non-intrusive load monitoring, or energy disaggregation, aims to separate household energy consumption data collected from a single point of measurement into appliance-level consumption data. In recent years, the field has rapidly expanded due to increased interest as national deployments of smart meters have begun in many countries. However, empirically comparing disaggregation algorithms is currently virtually impossible. This is due to the different data sets used, the lack of reference implementations of these algorithms and the variety of accuracy metrics employed. To address this challenge, we present the Non-intrusive Load Monitoring Toolkit (NILMTK); an open source toolkit designed specifically to enable the comparison of energy disaggregation algorithms in a reproducible manner. This work is the first research to compare multiple disaggregation approaches across multiple publicly available data sets. Our toolkit includes parsers for a range of existing data sets, a collection of preprocessing algorithms, a set of statistics for describing data sets, two reference benchmark disaggregation algorithms and a suite of accuracy metrics. We demonstrate the range of reproducible analyses which are made possible by our toolkit, including the analysis of six publicly available data sets and the evaluation of both benchmark disaggregation algorithms across such data sets.", "title": "" }, { "docid": "9c008dc2f3da4453317ce92666184da0", "text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3 (125 MIPS at 100 MHz). Compared to the widely used cache simulation tool, Valgrind, our simulator is three times faster.", "title": "" }, { "docid": "2aa324628b082f1fd6d1e1e0221d1bad", "text": "Recent behavioral investigations have revealed that autistics perform more proficiently on Raven's Standard Progressive Matrices (RSPM) than would be predicted by their Wechsler intelligence scores. A widely-used test of fluid reasoning and intelligence, the RSPM assays abilities to flexibly infer rules, manage goal hierarchies, and perform high-level abstractions. The neural substrates for these abilities are known to encompass a large frontoparietal network, with different processing models placing variable emphasis on the specific roles of the prefrontal or posterior regions. We used functional magnetic resonance imaging to explore the neural bases of autistics' RSPM problem solving. Fifteen autistic and eighteen non-autistic participants, matched on age, sex, manual preference and Wechsler IQ, completed 60 self-paced randomly-ordered RSPM items along with a visually similar 60-item pattern matching comparison task. 
Accuracy and response times did not differ between groups in the pattern matching task. In the RSPM task, autistics performed with similar accuracy, but with shorter response times, compared to their non-autistic controls. In both the entire sample and a subsample of participants additionally matched on RSPM performance to control for potential response time confounds, neural activity was similar in both groups for the pattern matching task. However, for the RSPM task, autistics displayed relatively increased task-related activity in extrastriate areas (BA18), and decreased activity in the lateral prefrontal cortex (BA9) and the medial posterior parietal cortex (BA7). Visual processing mechanisms may therefore play a more prominent role in reasoning in autistics.", "title": "" }, { "docid": "286dd9575b4de418b0d2daf121306e62", "text": "Abstract—Impedance transforming networks are described which consist of short lengths of relatively high impedance transmission line alternating with short lengths of relatively low impedance line. The sections of transmission line are all exactly the same length (except for corrections for fringing capacitances), and the lengths of the line sections are typically short compared to a quarter wavelength throughout the operating band of the transformer. Tables of designs are presented which give exactly Chebyshev transmission characteristics between resistive terminations having ratios ranging from 1.5 to 10, and for fractional bandwidths ranging from 0.10 to 1.20. These impedance-transforming networks should have application where very compact transmission-line or dielectric-layer impedance transformers are desired.", "title": "" } ]
scidocsrr
f02800775887b28ea5debc405b51badd
Learning and Transferring Social and Item Visibilities for Personalized Recommendation
[ { "docid": "51d950dfb9f71b9c8948198c147b9884", "text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.", "title": "" }, { "docid": "c0f789451f298fb00abc908ee00b4735", "text": "Data sparsity is a major problem for collaborative filtering (CF) techniques in recommender systems, especially for new users and items. We observe that, while our target data are sparse for CF systems, related and relatively dense auxiliary data may already exist in some other more mature application domains. In this paper, we address the data sparsity problem in a target domain by transferring knowledge about both users and items from auxiliary data sources. We observe that in different domains the user feedbacks are often heterogeneous such as ratings vs. clicks. Our solution is to integrate both user and item knowledge in auxiliary data sources through a principled matrix-based transfer learning framework that takes into account the data heterogeneity. In particular, we discover the principle coordinates of both users and items in the auxiliary data matrices, and transfer them to the target domain in order to reduce the effect of data sparsity. We describe our method, which is known as coordinate system transfer or CST, and demonstrate its effectiveness in alleviating the data sparsity problem in collaborative filtering. We show that our proposed method can significantly outperform several state-of-the-art solutions for this problem.", "title": "" } ]
[ { "docid": "01e419d399bd19b9ed1c34c67f1767a9", "text": "By using music written in a certain style as training data, parameters can be calculated for Markov chains and hidden Markov models to capture the musical style of the training data as mathematical models.", "title": "" }, { "docid": "1e38e2e7f3d1f2ae0ac74964f115f89a", "text": "Abstract—In this paper, a high-conversion-ratio bidirectional dc–dc converter with coupled inductor is proposed. In the boost mode, two capacitors are parallel charged and series discharged by the coupled inductor. Thus, high step-up voltage gain can be achieved with an appropriate duty ratio. The voltage stress on the main switch is reduced by a passive clamp circuit. Therefore, the low resistance RDS (ON) of the main switch can be adopted to reduce conduction loss. In the buck mode, two capacitors are series charged and parallel discharged by the coupled inductor. The bidirectional converter can have high step-down gain. Aside from that, all of the switches achieve zero voltage-switching turn-on, and the switching loss can be improved. Due to two active clamp circuits, the energy of the leakage inductor of the coupled inductor is recycled. The efficiency can be further improved. The operating principle and the steady-state analyses of the voltage gain are discussed.", "title": "" }, { "docid": "b712bbcad29af3bb8ad210fc9bbeab24", "text": "Image-based virtual try-on systems for fitting a new in-shop clothes into a person image have attracted increasing research attention, yet is still challenging. A desirable pipeline should not only transform the target clothes into the most fitting shape seamlessly but also preserve well the clothes identity in the generated image, that is, the key characteristics (e.g. texture, logo, embroidery) that depict the original clothes. However, previous image-conditioned generation works fail to meet these critical requirements towards the plausible virtual try-on performance since they fail to handle large spatial misalignment between the input image and target clothes. Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. In this work, we propose a new fully-learnable Characteristic-Preserving Virtual Try-On Network (CP-VTON) for addressing all real-world challenges in this task. First, CP-VTON learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a new Geometric Matching Module (GMM) rather than computing correspondences of interest points as prior works did. Second, to alleviate boundary artifacts of warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes and the rendered image to ensure smoothness. Extensive experiments on a fashion dataset demonstrate our CP-VTON achieves the state-of-the-art virtual try-on performance both qualitatively and quantitatively.", "title": "" }, { "docid": "b825426604420620e1bba43c0f45115e", "text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. 
To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of the contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.", "title": "" }, { "docid": "420719690b6249322927153daedba87b", "text": "• In-domain: at 91% F1 on the dev set, we reduced the learning rate from 10^-4 to 10^-5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further increased the result of our models.", "title": "" }, { "docid": "c1f17055249341dd6496fce9a2703b18", "text": "With systems performing Simultaneous Localization And Mapping (SLAM) from a single robot reaching considerable maturity, the possibility of employing a team of robots to collaboratively perform a task has been attracting increasing interest. Promising great impact in a plethora of tasks ranging from industrial inspection to digitization of archaeological structures, collaborative scene perception and mapping are key in efficient and effective estimation. In this paper, we propose a novel, centralized architecture for collaborative monocular SLAM employing multiple small Unmanned Aerial Vehicles (UAVs) to act as agents. Each agent is able to independently explore the environment running limited-memory SLAM onboard, while sending all collected information to a central server, a ground station with increased computational resources. The server manages the maps of all agents, triggering loop closure, map fusion, optimization and distribution of information back to the agents. This allows an agent to incorporate observations from others in its SLAM estimates on the fly. We put the proposed framework to the test employing a nominal keyframe-based monocular SLAM algorithm, demonstrating the applicability of this system in multi-UAV scenarios.", "title": "" }, { "docid": "de9ac411ae21f12d1101765b81ba9e0c", "text": "Aarti Singh, Department of Computer Science, Guru Nanak Girls College, Yamuna Nagar, Haryana, India. Email: singh2208@gmail.com. ABSTRACT: Ontologies play a vital role in knowledge representation in artificial intelligence systems. With the emergence and acceptance of the semantic web and associated services offered to the users, more and more ontologies have been developed by various stakeholders. Different ontologies need to be mapped for various systems to communicate with each other. Ontology mapping is an open research issue in web semantics. Exact mapping of ontologies is rare to achieve, so it's an optimization problem. 
This work presents an optimized ontology mapping mechanism which deploys a genetic algorithm.", "title": "" }, { "docid": "e3978d849b1449c40299841bfd70ea69", "text": "New generations of network intrusion detection systems create the need for advanced pattern-matching engines. This paper presents a novel scheme for pattern-matching, called BFPM, that exploits a hardware-based programmable state-machine technology to achieve deterministic processing rates that are independent of input and pattern characteristics on the order of 10 Gb/s for FPGA and at least 20 Gb/s for ASIC implementations. BFPM supports dynamic updates and is one of the most storage-efficient schemes in the industry, supporting two thousand patterns extracted from Snort with a total of 32 K characters in only 128 KB of memory.", "title": "" }, { "docid": "cc673c5b16be6fb62a69b471d6e24e95", "text": "Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.", "title": "" }, { "docid": "becea3d4b1a791b74dc7c6de15584611", "text": "This study analyzes the climate change and economic impacts of food waste in the United States. Using loss-adjusted national food availability data for 134 food commodities, it calculates the greenhouse gas emissions due to wasted food using life cycle assessment and the economic cost of the waste using retail prices. The analysis shows that avoidable food waste in the US exceeds 55 million metric tonnes per year, nearly 29% of annual production. This waste produces life-cycle greenhouse gas emissions of at least 113 million metric tonnes of CO2e annually, equivalent to 2% of national emissions, and costs $198 billion.", "title": "" }, { "docid": "35c7cb759c1ee8e7f547d9789e74b0f0", "text": "This research investigates an axial flux single-rotor single-stator asynchronous motor (AFAM) with aluminum and copper cage windings. In order to avoid using die casting of the rotor cage winding an open rotor slot structure was implemented. In future, this technique allows using copper cage winding avoiding critically high temperature treatment as in the die casting processing of copper material. However, an open slot structure leads to a large equivalent air gap length. Therefore, semi-magnetic wedges should be used to reduce the effect of open slots and consequently to improve the machine performance. The paper aims to investigate the feasibility of using open slot rotor structure (for avoiding die casting) and impact of semi-magnetic wedges to eliminate negative effects of open slots. 
The results were mainly obtained by 2D finite element method (FEM) simulations. Measurement results of mechanical performance of the prototype (with aluminum cage winding) given in the paper prove the simulated results.", "title": "" }, { "docid": "693417e5608cf092842ab34ee8cce8d9", "text": "Software as a Service has become a dominant IT news topic over the last few years. Especially in these current recession times, adopting SaaS solutions is increasingly becoming the more favorable alternative for customers rather than investing on brand new on-premise software or outsourcing. This fact has inevitably stimulated the birth of numerous SaaS vendors. Unfortunately, many small-to-medium vendors have emerged only to disappear again from the market. A lack of maturity in their pricing strategy often becomes part of the reason. This paper presents the ’Pricing Strategy Guideline Framework (PSGF)’ that assists SaaS vendors with a guideline to ensure that all the fundamental pricing elements are included in their pricing strategy. The PSGF describes five different layers that need to be taken to price a software: value creation, price structure, price and value communication, price policy, and price level. The PSGF can be of particularly great use for the startup vendors that tend to have less experience in pricing their SaaS solutions. Up until now, there have been no SaaS pricing frameworks available in the SaaS research area, such as the PSGF developed in this research. The PSGF is evaluated in a case study at a Dutch SaaS vendor in the Finance sector.", "title": "" }, { "docid": "7442f94af36f6d317291da814e7f3676", "text": "Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.", "title": "" }, { "docid": "f5128625b3687c971ba3bef98d7c2d2a", "text": "In three experiments, we investigated the influence of juror, victim, and case factors on mock jurors' decisions in several types of child sexual assault cases (incest, day care, stranger abduction, and teacher-perpetrated abuse). 
We also validated and tested the ability of several scales measuring empathy for child victims, children's believability, and opposition to adult/child sex, to mediate the effect of jurors' gender on case judgments. Supporting a theoretical model derived from research on the perceived credibility of adult rape victims, women compared to men were more empathic toward child victims, more opposed to adult/child sex, more pro-women, and more inclined to believe children generally. In turn, women (versus men) made more pro-victim judgments in hypothetical abuse cases; that is, attitudes and empathy generally mediated this juror gender effect that is pervasive in this literature. The experiments also revealed that strength of case evidence is a powerful factor in determining judgments, and that teen victims (14 years old) are blamed more for sexual abuse than are younger children (5 years old), but that perceptions of 5 and 10 year olds are largely similar. Our last experiment illustrated that our findings of mediation generalize to a community member sample.", "title": "" }, { "docid": "41cf7f09815ad0a8ebac914eaeaa44e3", "text": "Robotic devices are well-suited to provide high intensity upper limb therapy in order to induce plasticity and facilitate recovery from brain and spinal cord injury. In order to realize gains in functional independence, devices that target the distal joints of the arm are necessary. Further, the robotic device must exhibit key dynamic properties that enable both high dynamic transparency for assessment, and implementation of novel interaction control modes that significantly engage the participant. In this paper, we present the kinematic design, dynamical characterization, and clinical validation of the RiceWrist-S, a serial robotic mechanism that facilitates rehabilitation of the forearm in pronation-supination, and of the wrist in flexion-extension and radial-ulnar deviation. The RiceWrist-Grip, a grip force sensing handle, is shown to provide grip force measurements that correlate well with those acquired from a hand dynamometer. Clinical validation via a single case study of incomplete spinal cord injury rehabilitation for an individual with injury at the C3-5 level showed moderate gains in clinical outcome measures. Robotic measures of movement smoothness also captured gains, supporting our hypothesis that intensive upper limb rehabilitation with the RiceWrist-S would show beneficial outcomes. This work was supported in part by grants from Mission Connect, a project of the TIRR Foundation, the National Science Foundation Graduate Research Fellowship Program under Grant No. 0940902, NSF CNS-1135916, and H133P0800007-NIDRRARRT. A.U. Pehlivan, F. Sergi, A. Erwin, and M. K. O’Malley are with the Mechatronics and Haptic Interfaces Laboratory, Department of Mechanical Engineering, Rice University, Houston, TX 77005. F. Sergi is also with the department of PM&R, Baylor College of Medicine, Houston, TX 77030. N. Yozbatiran and G. E. Francisco are with the Department of PM&R and UTHealth Motor Recovery Lab, University of Texas Health Science Center at Houston, TX 77030 (e-mails: aliutku@rice.edu, fabs@rice.edu, ace7@rice.edu, Nuray.Yozbatiran@uth.tmc.edu, Gerard.E.Francisco@uth.tmc.edu, and omalleym@rice.edu)", "title": "" }, { "docid": "6da5e3f263171d93e2a1d6fe8e38a788", "text": "With the thriving growth of the cloud computing, the security and privacy concerns of outsourcing data have been increasing dramatically. 
However, because of delegating the management of data to an untrusted cloud server in data outsourcing process, the data access control has been recognized as a challenging issue in cloud storage systems. One of the preeminent technologies to control data access in cloud computing is Attribute-based Encryption (ABE) as a cryptographic primitive, which establishes the decryption ability on the basis of a user’s attributes. This paper provides a comprehensive survey on attribute-based access control schemes and compares each scheme’s functionality and characteristic. We also present a thematic taxonomy of attribute-based approaches based on significant parameters, such as access control mode, architecture, revocation mode, revocation method, revocation issue, and revocation controller. The paper reviews the state-of-the-art ABE methods and categorizes them into three main classes, such as centralized, decentralized, and hierarchal, based on their architectures. We also analyzed the different ABE techniques to ascertain the advantages and disadvantages, the significance and requirements, and identifies the research gaps. Finally, the paper presents open issues and challenges for further investigations. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cb667b5d3dd2e680f15b7167d20734cd", "text": "In this letter, a low loss high isolation broadband single-port double-throw (SPDT) traveling-wave switch using 90 nm CMOS technology is presented. A body bias technique is utilized to enhance the circuit performance of the switch, especially for the operation frequency above 30 GHz. The parasitic capacitance between the drain and source of the NMOS transistor can be further reduced using the negative body bias technique. Moreover, the insertion loss, the input 1 dB compression point (P1 dB)> and the third-order intermodulation (IMD3) of the switch are all improved. With the technique, the switch demonstrates an insertion loss of 3 dB and an isolation of better than 48 dB from dc to 60 GHz. The chip size of the proposed switch is 0.68 × 0.87 mm2 with a core area of only 0.32 × 0.21 mm2.", "title": "" }, { "docid": "80563d90bfdccd97d9da0f7276468a43", "text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.", "title": "" }, { "docid": "c55d17ec5082c2c5f12b22520b359c91", "text": "Android apps are made of components which can leak information between one another using the ICC mechanism. With the growing momentum of Android, a number of research contributions have led to tools for the intra-app analysis of Android apps. Unfortunately, these state-of-the-art approaches, and the associated tools, have long left out the security flaws that arise across the boundaries of single apps, in the interaction between several apps. In this paper, we present a tool called ApkCombiner which aims at reducing an inter-app communication problem to an intra-app inter-component communication problem. In practice, ApkCombiner combines different apps into a single apk on which existing tools can indirectly perform inter-app analysis. 
We have evaluated ApkCombiner on a dataset of 3,000 real-world Android apps to demonstrate its capability to support static context-aware inter-app analysis scenarios.", "title": "" } ]
scidocsrr
749466410f80db68ff91b3e2a31105c2
Subjectivity and sentiment analysis of Arabic: Trends and challenges
[ { "docid": "c757cc329886c1192b82f36c3bed8b7f", "text": "Though much research has been conducted on Subjectivity and Sentiment Analysis (SSA) during the last decade, little work has focused on Arabic. In this work, we focus on SSA for both Modern Standard Arabic (MSA) news articles and dialectal Arabic microblogs from Twitter. We showcase some of the challenges associated with SSA on microblogs. We adopted a random graph walk approach to extend the Arabic SSA lexicon using ArabicEnglish phrase tables, leading to improvements for SSA on Arabic microblogs. We used different features for both subjectivity and sentiment classification including stemming, part-of-speech tagging, as well as tweet specific features. Our classification features yield results that surpass Arabic SSA results in the literature.", "title": "" }, { "docid": "3553d1dc8272bf0366b2688e5107aa3f", "text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.", "title": "" } ]
[ { "docid": "93dd889fe9be3209be31e77c7191ac17", "text": "The aim of this review is to provide greater insight and understanding regarding the scientific nature of cycling. Research findings are presented in a practical manner for their direct application to cycling. The two parts of this review provide information that is useful to athletes, coaches and exercise scientists in the prescription of training regimens, adoption of exercise protocols and creation of research designs. Here for the first time, we present rationale to dispute prevailing myths linked to erroneous concepts and terminology surrounding the sport of cycling. In some studies, a review of the cycling literature revealed incomplete characterisation of athletic performance, lack of appropriate controls and small subject numbers, thereby complicating the understanding of the cycling research. Moreover, a mixture of cycling testing equipment coupled with a multitude of exercise protocols stresses the reliability and validity of the findings. Our scrutiny of the literature revealed key cycling performance-determining variables and their training-induced metabolic responses. The review of training strategies provides guidelines that will assist in the design of aerobic and anaerobic training protocols. Paradoxically, while maximal oxygen uptake (V-O(2max)) is generally not considered a valid indicator of cycling performance when it is coupled with other markers of exercise performance (e.g. blood lactate, power output, metabolic thresholds and efficiency/economy), it is found to gain predictive credibility. The positive facets of lactate metabolism dispel the 'lactic acid myth'. Lactate is shown to lower hydrogen ion concentrations rather than raise them, thereby retarding acidosis. Every aspect of lactate production is shown to be advantageous to cycling performance. To minimise the effects of muscle fatigue, the efficacy of employing a combination of different high cycling cadences is evident. The subconscious fatigue avoidance mechanism 'teleoanticipation' system serves to set the tolerable upper limits of competitive effort in order to assure the athlete completion of the physical challenge. Physiological markers found to be predictive of cycling performance include: (i) power output at the lactate threshold (LT2); (ii) peak power output (W(peak)) indicating a power/weight ratio of > or =5.5 W/kg; (iii) the percentage of type I fibres in the vastus lateralis; (iv) maximal lactate steady-state, representing the highest exercise intensity at which blood lactate concentration remains stable; (v) W(peak) at LT2; and (vi) W(peak) during a maximal cycling test. Furthermore, the unique breathing pattern, characterised by a lack of tachypnoeic shift, found in professional cyclists may enhance the efficiency and metabolic cost of breathing. The training impulse is useful to characterise exercise intensity and load during training and competition. It serves to enable the cyclist or coach to evaluate the effects of training strategies and may well serve to predict the cyclist's performance. Findings indicate that peripheral adaptations in working muscles play a more important role for enhanced submaximal cycling capacity than central adaptations. Clearly, relatively brief but intense sprint training can enhance both glycolytic and oxidative enzyme activity, maximum short-term power output and V-O(2max). To that end, it is suggested to replace approximately 15% of normal training with one of the interval exercise protocols. 
Tapering, through reduction in duration of training sessions or the frequency of sessions per week while maintaining intensity, is extremely effective for improvement of cycling time-trial performance. Overuse and over-training disabilities common to the competitive cyclist, if untreated, can lead to delayed recovery.", "title": "" }, { "docid": "559637a4f8f5b99bb3210c5c7d03d2e0", "text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.", "title": "" }, { "docid": "2752c235aea735a04b70272deb042ea6", "text": "Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.", "title": "" }, { "docid": "0c7b5a51a0698f261d147b2aa77acc83", "text": "The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real-time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer vision research. 
To tackle these challenges, we present a social media image processing pipeline that combines human and machine intelligence to perform two important tasks: (i) capturing and filtering of social media imagery content (i.e., real-time image streaming, de-duplication, and relevancy filtering); and (ii) actionable information extraction (i.e., damage severity assessment) as a core situational awareness task during an on-going crisis event. Results obtained from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.", "title": "" }, { "docid": "62d76b82614c64d022409081c71796a5", "text": "The statistical modeling of large multi-relational datasets has increasingly gained attention in recent years. Typical applications involve large knowledge bases like DBpedia, Freebase, YAGO and the recently introduced Google Knowledge Graph that contain millions of entities, hundreds and thousands of relations, and billions of relational tuples. Collective factorization methods have been shown to scale up to these large multi-relational datasets, in particular in form of tensor approaches that can exploit the highly scalable alternating least squares (ALS) algorithms for calculating the factors. In this paper we extend the recently proposed state-of-the-art RESCAL tensor factorization to consider relational type-constraints. Relational type-constraints explicitly define the logic of relations by excluding entities from the subject or object role. In addition we will show that in absence of prior knowledge about type-constraints, local closed-world assumptions can be approximated for each relation by ignoring unobserved subject or object entities in a relation. In our experiments on representative large datasets (Cora, DBpedia), that contain up to millions of entities and hundreds of type-constrained relations, we show that the proposed approach is scalable. It further significantly outperforms RESCAL without type-constraints in both, runtime and prediction quality.", "title": "" }, { "docid": "6fb0aac60ec74b5efca4eeda22be979d", "text": "Images captured in hazy or foggy weather conditions are seriously degraded by the scattering of atmospheric particles, which directly influences the performance of outdoor computer vision systems. In this paper, a fast algorithm for single image dehazing is proposed based on linear transformation by assuming that a linear relationship exists in the minimum channel between the hazy image and the haze-free image. First, the principle of linear transformation is analyzed. Accordingly, the method of estimating a medium transmission map is detailed and the weakening strategies are introduced to solve the problem of the brightest areas of distortion. To accurately estimate the atmospheric light, an additional channel method is proposed based on quad-tree subdivision. In this method, average grays and gradients in the region are employed as assessment criteria. Finally, the haze-free image is obtained using the atmospheric scattering model. Numerous experimental results show that this algorithm can clearly and naturally recover the image, especially at the edges of sudden changes in the depth of field. It can, thus, achieve a good effect for single image dehazing. Furthermore, the algorithmic time complexity is a linear function of the image size. 
This has obvious advantages in running time by guaranteeing a balance between the running speed and the processing effect.", "title": "" }, { "docid": "903b68096d2559f0e50c38387260b9c8", "text": "Vitamin C in humans must be ingested for survival. Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.", "title": "" }, { "docid": "cf121f496ae49eed2846b5be05d35d4d", "text": "Objective: This study provides evidence for the validity and reliability of the Rey Auditory Verbal Learning Test", "title": "" }, { "docid": "d9cdbff5533837858b1cd8334acd128d", "text": "A four-leaf steel spring used in the rear suspension system of light vehicles is analyzed using ANSYS V5.4 software. The finite element results showing stresses and deflections verified the existing analytical and experimental solutions. Using the results of the steel leaf spring, a composite one made from fiberglass with epoxy resin is designed and optimized using ANSYS. Main consideration is given to the optimization of the spring geometry. The objective was to obtain a spring with minimum weight that is capable of carrying given static external forces without failure. The design constraints were stresses (Tsai–Wu failure criterion) and displacements. The results showed that an optimum spring width decreases hyperbolically and the thickness increases linearly from the spring eyes towards the axle seat. Compared to the steel spring, the optimized composite spring has stresses that are much lower, the natural frequency is higher and the spring weight without eye units is nearly 80% lower. 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "deca482835114a5a0fd6dbdc62ae54d0", "text": "This paper presents an approach to design the transformer and the link inductor for the high-frequency link matrix converter. The proposed method aims to systematize the design process of the HF-link using analytic and software tools. The models for the characterization of the core and winding losses have been reviewed. 
Considerations about the practical implementation and construction of the magnetic devices are also provided. The software receives the inputs from the mathematical analysis and runs the optimization to find the best design. A 10 kW / 20 kHz transformer plus a link inductor are designed using this strategy achieving a combined efficiency of 99.32%.", "title": "" }, { "docid": "c2d926337d32cf88838546d19e6f9bde", "text": "This paper discusses the use of natural language or „conversational‟ agents in e-learning environments. We describe and contrast the various applications of conversational agent technology represented in the e-learning literature, including tutors, learning companions, language practice and systems to encourage reflection. We offer two more detailed examples of conversational agents, one which provides learning support, and the other support for self-assessment. Issues and challenges for developers of conversational agent systems for e-learning are identified and discussed.", "title": "" }, { "docid": "8b5ea4603ac53a837c3e81dfe953a706", "text": "Many teaching practices implicitly assume that conceptual knowledge can be abstracted from the situations in which it is learned and used. This article argues that this assumption inevitably limits the effectiveness of such practices. Drawing on recent research into cognition as it is manifest in everyday activity, the authors argue that knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used. They discuss how this view of knowledge affects our understanding of learning, and they note that conventional schooling too often ignores the influence of school culture on what is learned in school. As an alternative to conventional practices, they propose cognitive apprenticeship (Collins, Brown, Newman, in press), which honors the situated nature of knowledge. They examine two examples of mathematics instruction that exhibit certain key features of this approach to teaching. The breach between learning and use, which is captured by the folk categories \"know what\" and \"know how,\" may well be a product of the structure and practices of our education system. Many methods of didactic education assume a separation between knowing and doing, treating knowledge as an integral, self-sufficient substance, theoretically independent of the situations in which it is learned and used. The primary concern of schools often seems to be the transfer of this substance, which comprises abstract, decontextualized formal concepts. The activity and context in which learning takes place are thus regarded as merely ancillary to learning---pedagogically useful, of course, but fundamentally distinct and even neutral with respect to what is learned. Recent investigations of learning, however, challenge this separating of what is learned from how it is learned and used. The activity in which knowledge is developed and deployed, it is now argued, is not separable from or ancillary to learning and cognition. Nor is it neutral. Rather, it is an integral part of what is learned. Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated. In this paper, we try to explain in a deliberately speculative way, why activity and situations are integral to cognition and learning, and how different ideas of what is appropriate learning activity produce very different results. 
We suggest that, by ignoring the situated nature of cognition, education defeats its own goal of providing useable, robust knowledge. And conversely, we argue that approaches such as cognitive apprenticeship (Collins, Brown, & Newman, in press) that embed learning in activity and make deliberate use of the social and physical context are more in line with the understanding of learning and cognition that is emerging from research. Situated Knowledge and Learning Miller and Gildea's (1987) work on vocabulary teaching has shown how the assumption that knowing and doing can be separated leads to a teaching method that ignores the way situations structure cognition. Their work has described how children are taught words from dictionary definitions and a few exemplary sentences, and they have compared this method with the way vocabulary is normally learned outside school. People generally learn words in the context of ordinary communication. This process is startlingly fast and successful. Miller and Gildea note that by listening, talking, and reading, the average 17-year-old has learned vocabulary at a rate of 5,000 words per year (13 per day) for over 16 years. By contrast, learning words from abstract definitions and sentences taken out of the context of normal use, the way vocabulary has often been taught, is slow and generally unsuccessful. There is barely enough classroom time to teach more than 100 to 200 words per year. Moreover, much of what is taught turns out to be almost useless in practice. They give the following examples of students' uses of vocabulary acquired this way:definitions and sentences taken out of the context of normal use, the way vocabulary has often been taught, is slow and generally unsuccessful. There is barely enough classroom time to teach more than 100 to 200 words per year. Moreover, much of what is taught turns out to be almost useless in practice. They give the following examples of students' uses of vocabulary acquired this way: \"Me and my parents correlate, because without them I wouldn't be here.\" \"I was meticulous about falling off the cliff.\" \"Mrs. Morrow stimulated the soup.\" Given the method, such mistakes seem unavoidable. Teaching from dictionaries assumes that definitions and exemplary sentences are self-contained \"pieces\" of knowledge. But words and sentences are not islands, entire unto themselves. Language use would involve an unremitting confrontation with ambiguity, polysemy, nuance, metaphor, and so forth were these not resolved with the extralinguistic help that the context of an utterance provides (Nunberg, 1978). Prominent among the intricacies of language that depend on extralinguistic help are indexical words --words like I, here, now, next, tomorrow, afterwards, this. Indexical terms are those that \"index\"or more plainly point to a part of the situation in which communication is being conducted. They are not merely contextsensitive; they are completely context-dependent. Words like I or now, for instance, can only be interpreted in the 'context of their use. Surprisingly, all words can be seen as at least partially indexical (Barwise & Perry, 1983). Experienced readers implicitly understand that words are situated. They, therefore, ask for the rest of the sentence or the context before committing themselves to an interpretation of a word. And they go to dictionaries with situated examples of usage in mind. The situation as well as the dictionary supports the interpretation. 
But the students who produced the sentences listed had no support from a normal communicative situation. In tasks like theirs, dictionary definitions are assumed to be self-sufficient. The extralinguistic props that would structure, constrain, and ultimately allow interpretation in normal communication are ignored. Learning from dictionaries, like any method that tries to teach abstract concepts independently of authentic situations, overlooks the way understanding is developed through continued, situated use. This development, which involves complex social negotiations, does not crystallize into a categorical definition. Because it is dependent on situations and negotiations, the meaning of a word cannot, in principle, be captured by a definition, even when the definition is supported by a couple of exemplary sentences. All knowledge is, we believe, like language. Its constituent parts index the world and so are inextricably a product of the activity and situations in which they are produced. A concept, for example, will continually evolve with each new occasion of use, because new situations, negotiations, and activities inevitably recast it in a new, more densely textured form. So a concept, like the meaning of a word, is always under construction. This would also appear to be true of apparently well-defined, abstract technical concepts. Even these are not wholly definable and defy categorical description; part of their meaning is always inherited from the context of use. Learning and tools. To explore the idea that concepts are both situated and progressively developed through activity, we should abandon any notion that they are abstract, self-contained entities. Instead, it may be more useful to consider conceptual knowledge as, in some ways, similar to a set of tools. Tools share several significant features with knowledge: They can only be fully understood through use, and using them entails both changing the user's view of the world and adopting the belief system of the culture in which they are used. First, if knowledge is thought of as tools, we can illustrate Whitehead's (1929) distinction between the mere acquisition of inert concepts and the development of useful, robust knowledge. It is quite possible to acquire a tool but to be unable to use it. Similarly, it is common for students to acquire algorithms, routines, and decontextualized definitions that they cannot use and that, therefore, lie inert. Unfortunately, this problem is not always apparent. Old-fashioned pocket knives, for example, have a device for removing stones from horses' hooves. People with this device may know its use and be able to talk wisely about horses, hooves, and stones. But they may never betray --or even recognize --that they would not begin to know how to use this implement on a horse. Similarly, students can often manipulate algorithms, routines, and definitions they have acquired with apparent competence and yet not reveal, to their teachers or themselves, that they would have no idea what to do if they came upon the domain equivalent of a limping horse. People who use tools actively rather than just acquire them, by contrast, build an increasingly rich implicit understanding of the world in which they use the tools and of the tools themselves. The understanding, both of the world and of the tool, continually changes as a result of their interaction. Learning and acting are interestingly indistinct, learning being a continuous, life-long process resulting from acting in situations. 
Learning how to use a tool involves far more than can be accounted for in any set of explicit rules. The occasions and conditions for use arise directly out of the context of activities of each community that uses the tool, framed by the way members of that community see the world. The community and its viewpoint, quite as much as the tool itself, determine how a tool is used. Thus, carpenters and cabinet makers use chisels differently. Because tools and the way they are used reflect the particular accumulated insights of communities, it is not ", "title": "" }, { "docid": "e28ee6e29f61652f752ef311ebb40eaa", "text": "The increasing prevalence of Distributed Denial of Service (DDoS) attacks on the Internet has led to the wide adoption of DDoS Protection Service (DPS), which is typically provided by Content Delivery Networks (CDNs) and is integrated with CDN's security extensions. The effectiveness of DPS mainly relies on hiding the IP address of an origin server and rerouting the traffic to the DPS provider's distributed infrastructure, where malicious traffic can be blocked. In this paper, we perform a measurement study on the usage dynamics of DPS customers and reveal a new vulnerability in DPS platforms, called residual resolution, by which a DPS provider may leak origin IP addresses when its customers terminate the service or switch to other platforms, resulting in the failure of protection from future DPS providers as adversaries are able to discover the origin IP addresses and launch the DDoS attack directly to the origin servers. We identify that two major DPS/CDN providers, Cloudflare and Incapsula, are vulnerable to such residual resolution exposure, and we then assess the magnitude of the problem in the wild. Finally, we discuss the root causes of residual resolution and the practical countermeasures to address this security vulnerability.", "title": "" }, { "docid": "40db41aa0289dbf45bef067f7d3e3748", "text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male. Potential applications of this research include designs of industrial workstations, equipment, tools and products.", "title": "" }, { "docid": "0e6ed8195ef4ebadf86d881770c78137", "text": "In mixed radio-frequency (RF) and digital designs, noise from high-speed digital circuits can interfere with RF receivers, resulting in RF interference issues such as receiver desensitization. In this paper, an effective methodology is proposed to estimate the RF interference received by an antenna due to near-field coupling, which is one of the common noise-coupling mechanisms, using decomposition method based on reciprocity. In other words, the noise-coupling problem is divided into two steps. 
In the first step, the coupling from the noise source to a Huygens surface that encloses the antenna is studied, with the actual antenna structure removed, and the induced tangential electromagnetic fields due to the noise source on this surface are obtained. In the second step, the antenna itself with the same Huygens surface is studied. The antenna is treated as a transmitting one and the induced tangential electromagnetic fields on the surface are obtained. Then, the reciprocity theory is used and the noise power coupled to the antenna port in the original problem is estimated based on the results obtained in the two steps. The proposed methodology is validated through comparisons with full-wave simulations. It fits well with engineering practice, and is particularly suitable for prelayout wireless system design and planning.", "title": "" }, { "docid": "88033862d9fac08702977f1232c91f3a", "text": "Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.", "title": "" }, { "docid": "a280f710b0e41d844f1b9c76e7404694", "text": "Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. 
Limitations and implications are discussed.", "title": "" }, { "docid": "c2e53358f9d78071fc5204624cf9d6ad", "text": "This paper explores how the adoption of mobile and social computing technologies has impacted upon the way in which we coordinate social group-activities. We present a diary study of 36 individuals that provides an overview of how group coordination is currently performed as well as the challenges people face. Our findings highlight that people primarily use open-channel communication tools (e.g., text messaging, phone calls, email) to coordinate because the alternatives are seen as either disrupting or curbing to the natural conversational processes. Yet the use of open-channel tools often results in conversational overload and a significant disparity of work between coordinating individuals. This in turn often leads to a sense of frustration and confusion about coordination details. We discuss how the findings argue for a significant shift in our thinking about the design of coordination support systems.", "title": "" }, { "docid": "67f13c2b686593398320d8273d53852f", "text": "Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDI, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.", "title": "" }, { "docid": "09f812cae6c8952d27ef86168906ece8", "text": "Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study, instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.<<ETX>>", "title": "" } ]
scidocsrr
30b65372568a42a27adee77a0e0fed25
Incentives for Mobile Crowd Sensing: A Survey
[ { "docid": "acdcdae606f9c046aab912075d4ec609", "text": "Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SEQTGREEDY, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.", "title": "" }, { "docid": "a1367b21acfebfe35edf541cdc6e3f48", "text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.", "title": "" }, { "docid": "382ed9f0bbc8492d6aa10917dd3a53d0", "text": "Can WiFi signals be used for sensing purpose? The growing PHY layer capabilities of WiFi has made it possible to reuse WiFi signals for both communication and sensing. Sensing via WiFi would enable remote sensing without wearable sensors, simultaneous perception and data transmission without extra communication infrastructure, and contactless sensing in privacy-preserving mode. Due to the popularity of WiFi devices and the ubiquitous deployment of WiFi networks, WiFi-based sensing networks, if fully connected, would potentially rank as one of the world’s largest wireless sensor networks. Yet the concept of wireless and sensorless sensing is not the simple combination of WiFi and radar. It seeks breakthroughs from dedicated radar systems, and aims to balance between low cost and high accuracy, to meet the rising demand for pervasive environment perception in everyday life. Despite increasing research interest, wireless sensing is still in its infancy. 
Through introductions on basic principles and working prototypes, we review the feasibilities and limitations of wireless, sensorless, and contactless sensing via WiFi. We envision this article as a brief primer on wireless sensing for interested readers to explore this open and largely unexplored field and create next-generation wireless and mobile computing applications.", "title": "" }, { "docid": "bdadf0088654060b3f1c749ead0eea6e", "text": "This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.", "title": "" } ]
[ { "docid": "44b71e1429f731cc2d91f919182f95a4", "text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.", "title": "" }, { "docid": "ffc2079d68489ea7fae9f55ffd288018", "text": "Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.", "title": "" }, { "docid": "b324860905b6d8c4b4a8429d53f2543d", "text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.", "title": "" }, { "docid": "78f1b3a8b9aeff9fb860b46d6a2d8eab", "text": "We study the possibility to extend the concept of linguistic data summaries employing the notion of bipolarity. Yager's linguistic summaries may be derived using a fuzzy linguistic querying interface. We look for a similar analogy between bipolar queries and the extended form of linguistic summaries. 
The general concept of bipolar query, and its special interpretation are recalled, which turns out to be applicable to accomplish our goal. Some preliminary results are presented and possible directions of further research are pointed out.", "title": "" }, { "docid": "7c81ddf6b7e6853ac1d964f1c0accd40", "text": "DSM-5 distinguishes between paraphilias and paraphilic disorders. Paraphilias are defined as atypical, yet not necessarily disordered, sexual practices. Paraphilic disorders are instead diseases, which include distress, impairment in functioning, or entail risk of harm one's self or others. Hence, DSM-5 new approach to paraphilias demedicalizes and destigmatizes unusual sexual behaviors, provided they are not distressing or detrimental to self or others. Asphyxiophilia, a dangerous and potentially deadly form of sexual masochism involving sexual arousal by oxygen deprivation, are clearly described as disorders. Although autoerotic asphyxia has been associated with estimated mortality rates ranging from 250 to 1000 deaths per year in the United States, in Italy, knowledge on this condition is very poor. Episodes of death caused by autoerotic asphyxia seem to be underestimated because it often can be confounded with suicide cases, particularly in the Italian context where family members of the victim often try to disguise autoerotic behaviors of the victims. The current paper provides a review on sexual masochism disorder with asphyxiophilia and discusses one specific case as an example to examine those conditions that may or may not influence the likelihood that death from autoerotic asphyxia be erroneously reported as suicide or accidental injury.", "title": "" }, { "docid": "d822157e1fd65e8ec6da4601deb65b06", "text": "Bartholin's duct cysts and gland abscesses are common problems in women of reproductive age. Bartholin's glands are located bilaterally at the posterior introitus and drain through ducts that empty into the vestibule at approximately the 4 o'clock and 8 o'clock positions. These normally pea-sized glands are palpable only if the duct becomes cystic or a gland abscess develops. The differential diagnosis includes cystic and solid lesions of the vulva, such as epidermal inclusion cyst, Skene's duct cyst, hidradenoma papilliferum, and lipoma. The goal of management is to preserve the gland and its function if possible. Office-based procedures include insertion of a Word catheter for a duct cyst or gland abscess, and marsupialization of a cyst; marsupialization should not be used to treat a gland abscess. Broad-spectrum antibiotic therapy is warranted only when cellulitis is present. Excisional biopsy is reserved for use in ruling out adenocarcinoma in menopausal or perimenopausal women with an irregular, nodular Bartholin's gland mass.", "title": "" }, { "docid": "40555c2dc50a099ff129f60631f59c0d", "text": "As new technologies and information delivery systems emerge, the way in which individuals search for information to support research, teaching, and creative activities is changing. To understand different aspects of researchers’ information-seeking behavior, this article surveyed 2,063 academic researchers in natural science, engineering, and medical science from five research universities in the United States. A Web-based, in-depth questionnaire was designed to quantify researchers’ information searching, information use, and information storage behaviors. 
Descriptive statistics are reported.", "title": "" }, { "docid": "cb85db604bf21751766daf3751dd73bd", "text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.", "title": "" }, { "docid": "1705ba479a7ff33eef46e0102d4d4dd0", "text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is signi?cantly less time consuming than pure modelbased approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.", "title": "" }, { "docid": "5a46d347e83aec7624dde84ecdd5302c", "text": "This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014).", "title": "" }, { "docid": "057efe6414f7a38f2c8580f8f507c9d0", "text": "Film and television play an important role in popular culture. 
Their study, however, often requires watching and annotating video, a time-consuming process too expensive to run at scale. In this paper we study the evolution of different roles over time at a large scale by using media database cast lists. In particular, we focus on the gender distribution of those roles and how this changes over time. We compare real-life employment gender distributions to our web-mediated onscreen gender data and also investigate how gender role biases differ between film and television. We propose that these methodologies are a useful complement to traditional analysis and allow researchers to explore onscreen gender depictions using online evidence.", "title": "" }, { "docid": "334a7f34bca3452bb472d9071705c2bc", "text": "This paper addresses the analysis of oscillator phase-noise effects on the self-interference cancellation capability of full-duplex direct-conversion radio transceivers. Closed-form solutions are derived for the power of the residual self-interference stemming from phase noise in two alternative cases of having either independent oscillators or the same oscillator at the transmitter and receiver chains of the full-duplex transceiver. The results show that phase noise has a severe effect on self-interference cancellation in both of the considered cases, and that by using the common oscillator in upconversion and downconversion results in clearly lower residual self-interference levels. The results also show that it is in general vital to use high quality oscillators in full-duplex transceivers, or have some means for phase noise estimation and mitigation in order to suppress its effects. One of the main findings is that in practical scenarios the subcarrier-wise phase-noise spread of the multipath components of the self-interference channel causes most of the residual phase-noise effect when high amounts of self-interference cancellation is desired.", "title": "" }, { "docid": "4122d900e0f527d4e9ed1005a68b95bf", "text": "We present a method that learns to tell rear signals from a number of frames using a deep learning framework. The proposed framework extracts spatial features with a convolution neural network (CNN), and then applies a long short term memory (LSTM) network to learn the long-term dependencies. The brake signal classifier is trained using RGB frames, while the turn signal is recognized via a two-step localization approach. The two separate classifiers are learned to recognize the static brake signals and the dynamic turn signals. As a result, our recognition system can recognize 8 different rear signals via the combined two classifiers in real-world traffic scenes. Experimental results show that our method is able to obtain more accurate predictions than using only the CNN to classify rear signals with time sequence inputs.", "title": "" }, { "docid": "f31b3c4a2a8f3f05c3391deb1660ce75", "text": "In the field of providing mobility for the elderly or disabled the aspect of dealing with stairs continues largely unresolved. This paper focuses on presenting continued development of the “Nagasaki Stairclimber”, a duel section tracked wheelchair capable of negotiating the large number of twisting and irregular stairs typically encounted by the residents living on the slopes that surround the Nagasaki harbor. 
Recent developments include an auto guidance system, auto leveling of the chair angle and active control of the frontrear track angle.", "title": "" }, { "docid": "a3f5d2fb8bfa71b6f974a871a4ae2e5f", "text": "Recent years have witnessed the popularity of using recurrent neural network (RNN) for action recognition in videos. However, videos are of high dimensionality and contain rich human dynamics with various motion scales, which makes the traditional RNNs difficult to capture complex action information. In this paper, we propose a novel recurrent spatial-temporal attention network (RSTAN) to address this challenge, where we introduce a spatial-temporal attention mechanism to adaptively identify key features from the global video context for every time-step prediction of RNN. More specifically, we make three main contributions from the following aspects. First, we reinforce the classical long short-term memory (LSTM) with a novel spatial-temporal attention module. At each time step, our module can automatically learn a spatial-temporal action representation from all sampled video frames, which is compact and highly relevant to the prediction at the current step. Second, we design an attention-driven appearance-motion fusion strategy to integrate appearance and motion LSTMs into a unified framework, where LSTMs with their spatial-temporal attention modules in two streams can be jointly trained in an end-to-end fashion. Third, we develop actor-attention regularization for RSTAN, which can guide our attention mechanism to focus on the important action regions around actors. We evaluate the proposed RSTAN on the benchmark UCF101, HMDB51 and JHMDB data sets. The experimental results show that, our RSTAN outperforms other recent RNN-based approaches on UCF101 and HMDB51 as well as achieves the state-of-the-art on JHMDB.", "title": "" }, { "docid": "15c3ddb9c01d114ab7d09f010195465b", "text": "In this paper we have described a solution for supporting independent living of the elderly by means of equipping their home with a simple sensor network to monitor their behaviour. Standard home automation sensors including movement sensors and door entry point sensors are used. By monitoring the sensor data, important information regarding any anomalous behaviour will be identified. Different ways of visualizing large sensor data sets and representing them in a format suitable for clustering the abnormalities are also investigated. In the latter part of this paper, recurrent neural networks are used to predict the future values of the activities for each sensor. The predicted values are used to inform the caregiver in case anomalous behaviour is predicted in the near future. Data collection, classification and prediction are investigated in real home environments with elderly occupants suffering from dementia.", "title": "" }, { "docid": "ba8ae795796d9d5c1d33d4e5ce692a13", "text": "This work presents a type of capacitive sensor for intraocular pressure (IOP) measurement on soft contact lens with Radio Frequency Identification (RFID) module. The flexible capacitive IOP sensor and Rx antenna was designed and fabricated using MEMS fabrication technologies that can be embedded on a soft contact lens. The IOP sensing unit is a sandwich structure composed of parylene C as the substrate and the insulating layer, gold as the top and bottom electrodes of the capacitor, and Hydroxyethylmethacrylate (HEMA) as dielectric material between top plate and bottom plate. 
The main sensing principle is using wireless IOP contact lenses sensor (CLS) system placed on corneal to detect the corneal deformation caused due to the variations of IOP. The variations of intraocular pressure will be transformed into capacitance change and this change will be transmitted to RFID system and recorded as continuous IOP monitoring. The measurement on in-vitro porcine eyes show the pressure reproducibility and a sensitivity of 0.02 pF/4.5 mmHg.", "title": "" }, { "docid": "b8c8511489622220f9347daede5f31e8", "text": "Recently, different systems which learn to populate and extend a knowledge base (KB) from the web in different languages have been presented. Although a large set of concepts should be learnt independently from the language used to read, there are facts which are expected to be more easily gathered in local language (e.g., culture or geography). A system that merges KBs learnt in different languages will benefit from the complementary information as long as common beliefs are identified, as well as from redundancy present in web pages written in different languages. In this paper, we deal with the problem of identifying equivalent beliefs (or concepts) across language specific KBs, assuming that they share the same ontology of categories and relations. In a case study with two KBs independently learnt from different inputs, namely web pages written in English and web pages written in Portuguese respectively, we report on the results of two methodologies: an approach based on personalized PageRank and an inference technique to find out common relevant paths through the KBs. The proposed inference technique efficiently identifies relevant paths, outperforming the baseline (a dictionary-based classifier) in the vast majority of tested categories.", "title": "" }, { "docid": "6200d3c4435ae34e912fc8d2f92e904b", "text": "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "title": "" }, { "docid": "60f2baba7922543e453a3956eb503c05", "text": "Pylearn2 is a machine learning research library. 
This does not just mean that it is a collection of machine learning algorithms that share a common API; it means that it has been designed for flexibility and extensibility in order to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summary of the library’s architecture, and a description of how the Pylearn2 community functions socially.", "title": "" } ]
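The correspondence autoencoder (Corr-AE) passage above describes an objective that combines per-modality reconstruction errors with a correlation term between the two hidden representations, balanced by a parameter alpha. The sketch below is one minimal way such an objective could look in PyTorch; the layer sizes, the ReLU encoders, and the squared-distance correlation term are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class CorrAESketch(nn.Module):
    # Single linear encoder/decoder per modality and these dimensions are
    # illustrative assumptions, not the architecture used in the paper.
    def __init__(self, dim_img=128, dim_txt=64, dim_hidden=32):
        super().__init__()
        self.enc_img, self.dec_img = nn.Linear(dim_img, dim_hidden), nn.Linear(dim_hidden, dim_img)
        self.enc_txt, self.dec_txt = nn.Linear(dim_txt, dim_hidden), nn.Linear(dim_hidden, dim_txt)

    def forward(self, x_img, x_txt):
        h_img, h_txt = torch.relu(self.enc_img(x_img)), torch.relu(self.enc_txt(x_txt))
        return h_img, h_txt, self.dec_img(h_img), self.dec_txt(h_txt)

def corr_ae_loss(model, x_img, x_txt, alpha=0.2):
    h_img, h_txt, rec_img, rec_txt = model(x_img, x_txt)
    rec = ((rec_img - x_img) ** 2).mean() + ((rec_txt - x_txt) ** 2).mean()
    corr = ((h_img - h_txt) ** 2).mean()   # pulls the two hidden codes together
    return (1 - alpha) * rec + alpha * corr

# Example call on random data, just to show the shapes involved.
model = CorrAESketch()
loss = corr_ae_loss(model, torch.randn(8, 128), torch.randn(8, 64))
```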
scidocsrr
06bf6b1c3ad2f5fb1261ddd6fb80f033
DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning
[ { "docid": "541ebcc2e081ea1a08bbaba2e9820510", "text": "We present an analytic study on the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic characteristics of untrustworthy text. To probe the feasibility of automatic political fact-checking, we also present a case study based on PolitiFact.com using their factuality judgments on a 6-point scale. Experiments show that while media fact-checking remains to be an open research question, stylistic cues can help determine the truthfulness of text.", "title": "" }, { "docid": "26cedddd8a5a5f3a947fd6c85b8c41ad", "text": "In today's world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events, it can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper, is to highlight the role of Twitter, during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter, during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. Eighty six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that top thirty users out of 10,215 users (0.3%) resulted in 90% of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11%) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97% accuracy in predicting fake images from real. Also, tweet based features were very effective in distinguishing fake images tweets from real, while the performance of user based features was very poor. Our results, showed that, automated techniques can be used in identifying real images from fake images posted on Twitter.", "title": "" } ]
[ { "docid": "e78e70d347fb76a79755442cabe1fbe0", "text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.", "title": "" }, { "docid": "d0649a8b51f61ead177dc60838d749b4", "text": "Reduction otoplasty is an uncommon procedure performed for macrotia and ear asymmetry. Techniques described in the literature for this procedure are few. The authors present their ear reduction approach that not only achieves the desired reduction effectively and accurately, but also addresses and creates the natural anatomic proportions of the ear, leaving a scar well hidden within the fold of the helix.", "title": "" }, { "docid": "5c2297cf5892ebf9864850dc1afe9cbf", "text": "In this paper, we propose a novel technique for generating images in the 3D domain from images with high degree of geometrical transformations. By coalescing two popular concurrent methods that have seen rapid ascension to the machine learning zeitgeist in recent years: GANs (Goodfellow et. al.) and Capsule networks (Sabour, Hinton et. al.) we present: CapsGAN. We show that CapsGAN performs better than or equal to traditional CNN based GANs in generating images with high geometric transformations using rotated MNIST. In the process, we also show the efficacy of using capsules architecture in the GANs domain. Furthermore, we tackle the Gordian Knot in training GANs the performance control and training stability by experimenting with using Wasserstein distance (gradient clipping, penalty) and Spectral Normalization. The experimental findings of this paper should propel the application of capsules and GANs in the still exciting and nascent domain of 3D image generation, and plausibly video (frame) generation.", "title": "" }, { "docid": "6b81fe23d8c2cb7ad7d296546a3cdadf", "text": "Please cite this article in press as: H.J. Oh Vis. Comput. (2008), doi:10.1016/j.imavis In this paper, we propose a novel occlusion invariant face recognition algorithm based on Selective Local Non-negative Matrix Factorization (S-LNMF) technique. The proposed algorithm is composed of two phases; the occlusion detection phase and the selective LNMF-based recognition phase. We use a local approach to effectively detect partial occlusions in an input face image. 
A face image is first divided into a finite number of disjointed local patches, and then each patch is represented by PCA (Principal Component Analysis), obtained by corresponding occlusion-free patches of training images. And the 1-NN threshold classifier is used for occlusion detection for each patch in the corresponding PCA space. In the recognition phase, by employing the LNMF-based face representation, we exclusively use the LNMF bases of occlusion-free image patches for face recognition. Euclidean nearest neighbor rule is applied for the matching. We have performed experiments on AR face database that includes many occluded face images by sunglasses and scarves. The experimental results demonstrate that the proposed local patch-based occlusion detection technique works well and the S-LNMF method shows superior performance to other conventional approaches. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0423596618a9d779c9ca5f3d899fdfe6", "text": "An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target's internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences.", "title": "" }, { "docid": "2cd53bcf5d0df4cfafd1801378ab20d5", "text": "Much prior research demonstrates that narcissists take more risks than others, but almost no research has examined what motivates this behavior. The present study tested two potential driving mechanisms of risk-taking by narcissists (i.e., heightened perceptions of benefits and diminished perceptions of risks stemming from risky behaviors) by administering survey measures of narcissism and risk-taking to a sample of 605 undergraduate college students. Contrary to what might be expected, the results suggest that narcissists appreciate the risks associated with risky behaviors just as much as do less narcissistic individuals. Their risk-taking appears to instead be fueled by heightened perceptions of benefits stemming from risky behaviors. 
These results are consistent with a growing body of evidence suggesting that narcissists engage in some forms of potentially problematic behaviors, such as risk-taking, because of a surplus of eagerness rather than a deficit of inhibition. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "c04cf54a40cd84961657bf50153ff68b", "text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text(local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACER, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.", "title": "" }, { "docid": "9b4800f8cd89cce37bada95cf044b1a0", "text": "Jumping is used in nature by many small animals to locomote in cluttered environments or in rough terrain. It offers small systems the benefit of overcoming relatively large obstacles at a low energetic cost. In order to be able to perform repetitive jumps in a given direction, it is important to be able to upright after landing, steer and jump again. In this article, we review and evaluate the uprighting and steering principles of existing jumping robots and present a novel spherical robot with a mass of 14 g and a size of 18 cm that can jump up to 62 cm at a take-off angle of 75°, recover passively after landing, orient itself, and jump again. 
We describe its design details and fabrication methods, characterize its jumping performance, and demonstrate the remote controlled prototype repetitively moving over an obstacle course where it has to climb stairs and go through a window. (See videos 1–4 in the electronic supplementary", "title": "" }, { "docid": "b417b412334d8d5ce931f93f564df528", "text": "The field of dataset shift has received a growing amount of interest in the last few years. The fact that most real-world applications have to cope with some form of shift makes its study highly relevant. The literature on the topic is mostly scattered, and different authors use different names to refer to the same concepts, or use the same name for different concepts. With this work, we attempt to present a unifying framework through the review and comparison of some of the most important works in the", "title": "" }, { "docid": "c4b6df3abf37409d6a6a19646334bffb", "text": "Classification in imbalanced domains is a recent challenge in data mining. We refer to imbalanced classification when data presents many examples from one class and few from the other class, and the less representative class is the one which has more interest from the point of view of the learning task. One of the most used techniques to tackle this problem consists in preprocessing the data previously to the learning process. This preprocessing could be done through under-sampling; removing examples, mainly belonging to the majority class; and over-sampling, by means of replicating or generating new minority examples. In this paper, we propose an under-sampling procedure guided by evolutionary algorithms to perform a training set selection for enhancing the decision trees obtained by the C4.5 algorithm and the rule sets obtained by PART rule induction algorithm. The proposal has been compared with other under-sampling and over-sampling techniques and the results indicate that the new approach is very competitive in terms of accuracy when comparing with over-sampling and it outperforms standard under-sampling. Moreover, the obtained models are smaller in terms of number of leaves or rules generated and they can considered more interpretable. The results have been contrasted through non-parametric statistical tests over multiple data sets. Crown Copyright 2009 Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9ece98aee7056ff6c686c12bcdd41d31", "text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "title": "" }, { "docid": "e306933b27867c99585d7fc82cc380ff", "text": "We introduce a new OS abstraction—light-weight contexts (lwCs)—that provides independent units of protection, privilege, and execution state within a process. 
A process may include several lwCs, each with possibly different views of memory, file descriptors, and access capabilities. lwCs can be used to efficiently implement roll-back (process can return to a prior recorded state), isolated address spaces (lwCs within the process may have different views of memory, e.g., isolating sensitive data from network-facing components or isolating different user sessions), and privilege separation (in-process reference monitors can arbitrate and control access). lwCs can be implemented efficiently: the overhead of a lwC is proportional to the amount of memory exclusive to the lwC; switching lwCs is quicker than switching kernel threads within the same process. We describe the lwC abstraction and API, and an implementation of lwCs within the FreeBSD 11.0 kernel. Finally, we present an evaluation of common usage patterns, including fast rollback, session isolation, sensitive data isolation, and inprocess reference monitoring, using Apache, nginx, PHP, and OpenSSL.", "title": "" }, { "docid": "34a21bf5241d8cc3a7a83e78f8e37c96", "text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.", "title": "" }, { "docid": "8e2f1f2c73ca3f9754348dd938d4f897", "text": "During the long history of computer vision, one of the grand challenges has been semantic segmentation which is the ability to segment an unknown image into different parts and objects (e.g., beach, ocean, sun, dog, swimmer). Furthermore, segmentation is even deeper than object recognition because recognition is not necessary for segmentation. Specifically, humans can perform image segmentation without even knowing what the objects are (for example, in satellite imagery or medical X-ray scans, there may be several objects which are unknown, but they can still be segmented within the image typically for further investigation). Performing segmentation without knowing the exact identity of all objects in the scene is an important part of our visual understanding process which can give us a powerful model to understand the world and also be used to improve or augment existing computer vision techniques. Herein this work, we review the field of semantic segmentation as pertaining to deep convolutional neural networks. We provide comprehensive coverage of the top approaches and summarize the strengths, weaknesses and major challenges.", "title": "" }, { "docid": "4f967ef2b57a7e22e61fb4f26286f69a", "text": "Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods, which limits sample preparation and provides high-quality imaging data essential in the detection of latent fingerprints. 
Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin, DFO, cyanoacrylate, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.", "title": "" }, { "docid": "4b8ed77a97d2eb2c83ae49da7db9314f", "text": "From early 1967 to the summer of 1969, the author had the opportunity to observe the behavior of a pair of African Ostrich (Struthio camelus) and their hand-raised chicks in the Oklahoma City Zoo. Because agonistic behaviors could be observed almost daily throughout the year, and courtship for at least 5 months, and because the author knew of very few accounts of ostrich behavior, it was decided to accumulate as much information from these birds as possible. Sauer and Sauer (In: The living bird, Vol. 5, Cornell Univ. Press, Ithaca, 1966, p. 45-76) in their study, mainly from the Namib Desert Game Reserve 3, pointed out that the hens having molted earlier than the males initiate the prenuptial activities. “They will posture and stand very erect, urinate and defecate, and otherwise behave in exaggerated manners in front of potential or familiar mates.” They become increasingly aggressive toward birds other than the male they court, and particularly so toward immature birds. The males begin their courtship later, at which time a red coloration of their shins, feet, and faces appears. Their ceremonial rivalries toward one another become increasingly frequent. They may be seen ‘chasing around in groups, wings held high, and ‘dancing’ in flocks numbering up to […] symbolic nest-site display between male and female during their precopulatory courtship.” The courtship is initiated by male and female as they begin to feed, often with heads close together, while pecking in a nervous, highly synchronized fashion. As the excitation mounts, “the two birds walk towards and around an area chosen for the symbolic nest-site display by the male. He throws his wings up in an alternating rhythm of right-left, flashing his white wing feathers. Then suddenly he drops to the ground and begins nesting symbolically in a very exaggerated manner, whirling dust when his wings sweep the ground. At the same time he twists his neck in a way that resembles a continuous ‘corkscrew action’.” The female responds by walking with lowered head, curved downward pointing wings, and drooping tail. When finally she squats on the ground, the cock gets up and rushes toward her with flapping wings and mounts her. AIM, SUBJECTS, AND METHOD OF STUDY In the present study a more detailed analysis of agonistic and courtship displays was attempted than those known to the author. The principal subjects were a couple of birds belonging to, and housed in, the Okla-", "title": "" }, { "docid": "bdaa8b87cdaef856b88b7397ddc77d97", "text": "In artificial neural networks (ANNs), the activation function most used in practice are the logistic sigmoid function and the hyperbolic tangent function. The activation functions used in ANNs have been said to play an important role in the convergence of the learning algorithms. In this paper, we evaluate the use of different activation functions and suggest the use of three new simple functions, complementary log-log, probit and log-log, as activation functions in order to improve the performance of neural networks. 
Financial time series were used to evaluate the performance of ANNs models using these new activation functions and to compare their performance with some activation functions existing in the literature. This evaluation is performed through two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates and Levenberg–Marquardt.", "title": "" }, { "docid": "584456ef251fbf31363832fc82bd3d42", "text": "Neural network architectures found by sophistic search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their design is often very complex, impairing execution speed. Additionally, finding models outside of the search space is not possible by design. While our space is still limited, we implement undiscoverable expert knowledge into the economic search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining baselinelike 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low parameter space, achieving less than 5% test error with little regularization and only 236K parameters.", "title": "" }, { "docid": "c81fb61f8c12dfe3bb88d417d9ec645a", "text": "Existing timeline generation systems for complex events consider only information from traditional media, ignoring the rich social context provided by user-generated content that reveals representative public interests or insightful opinions. We instead aim to generate socially-informed timelines that contain both news article summaries and selected user comments. We present an optimization framework designed to balance topical cohesion between the article and comment summaries along with their informativeness and coverage of the event. Automatic evaluations on real-world datasets that cover four complex events show that our system produces more informative timelines than state-of-theart systems. In human evaluation, the associated comment summaries are furthermore rated more insightful than editor’s picks and comments ranked highly by users.", "title": "" } ]
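The activation-function passage above proposes complementary log-log, probit, and log-log as alternatives to the logistic sigmoid and hyperbolic tangent. Shown below are the textbook closed forms of those three squashing functions in NumPy/SciPy; the authors' exact parameterisations may differ, so treat this only as a reference sketch.

```python
import numpy as np
from scipy.stats import norm

def cloglog(x):   # complementary log-log: maps the real line to (0, 1)
    return 1.0 - np.exp(-np.exp(x))

def probit(x):    # probit: the standard normal CDF
    return norm.cdf(x)

def loglog(x):    # log-log
    return np.exp(-np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
for name, f in [("cloglog", cloglog), ("probit", probit), ("loglog", loglog)]:
    print(name, np.round(f(x), 3))
```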
scidocsrr
436abca7cd898f03ecbe6f230c5bf4ce
Virtual Machine Introspection: Techniques and Applications
[ { "docid": "4d9397a14425e13f9e4b7a340008f416", "text": "Applying optimized security settings to applications is a difficult and laborious task. Especially in cloud computing, where virtual servers with various pre-installed software packages are leased, selecting optimized security settings is very difficult. In particular, optimized security settings are not identical in every setup. They depend on characteristics of the setup, on the ways an application is used or on other applications running on the same system. Configuring optimized settings given these interdependencies is a complex and time-consuming task. In this work, we present an autonomous agent which improves security settings of applications which run in virtual servers. The agent retrieves custom-made security settings for a target application by investigating its specific setup, it tests and transparently changes settings via introspection techniques unbeknownst from the perspective of the virtual server. During setting selection, the application's operation is not disturbed nor any user interaction is needed. Since optimal settings can change over time or they can change depending on different tasks the application handles, the agent can continuously adapt settings as well as improve them periodically. We call this approach hot-hardening and present results of an implementation that can hot-harden popular networking applications such as Apache2 and OpenSSH.", "title": "" }, { "docid": "79503c15b37209892fa7cfe02c90f967", "text": "To direct the operation of a computer, we often use a shell, a user interface that provides accesses to the OS kernel services. Traditionally, shells are designed atop an OS kernel. In this paper, we show that a shell can also be designed below an OS. More specifically, we present HYPERSHELL, a practical hypervisor layer guest OS shell that has all of the functionality of a traditional shell, but offers better automation, uniformity and centralized management. This will be particularly useful for cloud and data center providers to manage the running VMs in a large scale. To overcome the semantic gap challenge, we introduce a reverse system call abstraction, and we show that this abstraction can significantly relieve the painful process of developing software below an OS. More importantly, we also show that this abstraction can be implemented transparently. As such, many of the legacy guest OS management utilities can be directly reused in HYPERSHELL without any modification. Our evaluation with over one hundred management utilities demonstrates that HYPERSHELL has 2.73X slowdown on average compared to their native in-VM execution, and has less than 5% overhead to the guest OS kernel.", "title": "" }, { "docid": "6e666fdd26ea00a6eebf7359bdf82329", "text": "Kernel-level attacks or rootkits can compromise the security of an operating system by executing with the privilege of the kernel. Current approaches use virtualization to gain higher privilege over these attacks, and isolate security tools from the untrusted guest VM by moving them out and placing them in a separate trusted VM. Although out-of-VM isolation can help ensure security, the added overhead of world-switches between the guest VMs for each invocation of the monitor makes this approach unsuitable for many applications, especially fine-grained monitoring. 
In this paper, we present Secure In-VM Monitoring (SIM), a general-purpose framework that enables security monitoring applications to be placed back in the untrusted guest VM for efficiency without sacrificing the security guarantees provided by running them outside of the VM. We utilize contemporary hardware memory protection and hardware virtualization features available in recent processors to create a hypervisor protected address space where a monitor can execute and access data in native speeds and to which execution is transferred in a controlled manner that does not require hypervisor involvement. We have developed a prototype into KVM utilizing Intel VT hardware virtualization technology. We have also developed two representative applications for the Windows OS that monitor system calls and process creations. Our microbenchmarks show at least 10 times performance improvement in invocation of a monitor inside SIM over a monitor residing in another trusted VM. With a systematic security analysis of SIM against a number of possible threats, we show that SIM provides at least the same security guarantees as what can be achieved by out-of-VM monitors.", "title": "" }, { "docid": "b6e67047ac710fa619c809839412231c", "text": "An essential goal of Virtual Machine Introspection (VMI) is assuring security policy enforcement and overall functionality in the presence of an untrustworthy OS. A fundamental obstacle to this goal is the difficulty in accurately extracting semantic meaning from the hypervisor's hardware level view of a guest OS, called the semantic gap. Over the twelve years since the semantic gap was identified, immense progress has been made in developing powerful VMI tools. Unfortunately, much of this progress has been made at the cost of reintroducing trust into the guest OS, often in direct contradiction to the underlying threat model motivating the introspection. Although this choice is reasonable in some contexts and has facilitated progress, the ultimate goal of reducing the trusted computing base of software systems is best served by a fresh look at the VMI design space. This paper organizes previous work based on the essential design considerations when building a VMI system, and then explains how these design choices dictate the trust model and security properties of the overall system. The paper then observes portions of the VMI design space which have been under-explored, as well as potential adaptations of existing techniques to bridge the semantic gap without trusting the guest OS. Overall, this paper aims to create an essential checkpoint in the broader quest for meaningful trust in virtualized environments through VM introspection.", "title": "" } ]
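The introspection passages above turn on the semantic gap: raw guest memory only becomes meaningful once the monitor knows the guest OS's data-structure layouts. The toy walk below illustrates the idea over a captured memory buffer; the offsets, the flat addressing, and the list layout are invented simplifications, not real kernel layouts or the API of any VMI library.

```python
import struct

# Offsets and the flat addressing below are invented for illustration;
# real introspection needs the guest kernel's actual structure layouts.
NEXT_OFFSET = 0x08   # assumed offset of the "next process" pointer
NAME_OFFSET = 0x10   # assumed offset of a 16-byte process name field

def read_u64(mem, addr):
    return struct.unpack_from("<Q", mem, addr)[0]

def walk_process_list(mem, head, max_procs=16):
    # Follow the assumed linked list of process records inside the buffer
    # and collect the NUL-terminated name stored in each record.
    names, cur = [], read_u64(mem, head + NEXT_OFFSET)
    for _ in range(max_procs):
        raw = mem[cur + NAME_OFFSET : cur + NAME_OFFSET + 16]
        names.append(raw.split(b"\x00")[0].decode(errors="replace"))
        cur = read_u64(mem, cur + NEXT_OFFSET)
        if cur == head:
            break
    return names
```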
[ { "docid": "8f704e4c4c2a0c696864116559a0f22c", "text": "Friendships with competitors can improve the performance of organizations through the mechanisms of enhanced collaboration, mitigated competition, and better information exchange. Moreover, these benefits are best achieved when competing managers are embedded in a cohesive network of friendships (i.e., one with many friendships among competitors), since cohesion facilitates the verification of information culled from the network, eliminates the structural holes faced by customers, and facilitates the normative control of competitors. The first part of this analysis examines the performance implications of the friendship-network structure within the Sydney hotel industry, with performance being the yield (i.e., revenue per available room) of a given hotel. This shows that friendships with competitors lead to dramatic improvements in hotel yields. Performance is further improved if a manager’s competitors are themselves friends, evidencing the benefit of cohesive friendship networks. The second part of the analysis examines the structure of friendship ties among hotel managers and shows that friendships are more likely between managers who are competitors.", "title": "" }, { "docid": "39d1271ce88b840b8d75806faf9463ad", "text": "Dynamically Reconfigurable Systems (DRS), implemented using Field-Programmable Gate Arrays (FPGAs), allow hardware logic to be partially reconfigured while the rest of a design continues to operate. By mapping multiple reconfigurable hardware modules to the same physical region of an FPGA, such systems are able to time-multiplex their circuits at run time and can adapt to changing execution requirements. This architectural flexibility introduces challenges for verifying system functionality. New simulation approaches need to extend traditional simulation techniques to assist designers in testing and debugging the time-varying behavior of DRS. Another significant challenge is the effective use of tools so as to reduce the number of design iterations. This thesis focuses on simulation-based functional verification of modular reconfigurable DRS designs. We propose a methodology and provide tools to assist designers in verifying DRS designs while part of the design is undergoing reconfiguration. This thesis analyzes the challenges in verifying DRS designs with respect to the user design and the physical implementation of such systems. We propose using a simulationonly layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. The simulation-only layer maintains verification productivity by abstracting away the physical details of the FPGA fabric. Furthermore, since the design does not need to be modified for simulation purposes, the design as implemented instead of some variation of it is verified. We provide two possible implementations of the simulation-only layer. Extended ReChannel is a SystemC library that can be used to model DRS at a high level. ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, we demonstrate that with insignificant overheads, our approach seamlessly integrates with the existing, mainstream DRS design flow and with wellestablished verification methodologies such as top-down modeling and coverage-driven verification. The case studies also serve as a guide in the use of our libraries to identify bugs that are related to Dynamic Partial Reconfiguration. 
Our results demonstrate that using the simulation-only layer is an effective approach to the simulation-based functional verification of DRS designs.", "title": "" }, { "docid": "d1f02e2f57cffbc17387de37506fddc9", "text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.", "title": "" }, { "docid": "c95980f3f1921426c20757e6020f62c2", "text": "Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.", "title": "" }, { "docid": "480c55bca0099f25a01fe7a9701eef6a", "text": "Development of the technology in the area of the cameras, computers and algorithms for 3D the reconstruction of the objects from the images resulted in the increased popularity of the photogrammetry. Algorithms for the 3D model reconstruction are so advanced that almost anyone can make a 3D model of photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for the purposes of the close-range photogrammetry applications, based on the open source technologies. All steps of obtaining 3D point cloud are covered in this paper. Special attention is given to the camera calibration, for which two-step process of calibration is used. Both, presented algorithm and accuracy of the point cloud are tested by calculating the spatial difference between referent and produced point clouds. During algorithm testing, robustness and swiftness of obtaining 3D data is noted, and certainly usage of this and similar algorithms has a lot of potential in the real-time application. That is the reason why this research can find its application in the architecture, spatial planning, protection of cultural heritage, forensic, mechanical engineering, traffic management, medicine and other sciences. * Corresponding author", "title": "" }, { "docid": "186141651bfb780865712deb8c407c54", "text": "Sample and statistically based singing synthesizers typically require a large amount of data for automatically generating expressive synthetic performances. In this paper we present a singing synthesizer that using two rather small databases is able to generate expressive synthesis from an input consisting of notes and lyrics. 
The system is based on unit selection and uses the Wide-Band Harmonic Sinusoidal Model for transforming samples. The first database focuses on expression and consists of less than 2 minutes of free expressive singing using solely vowels. The second one is the timbre database which for the English case consists of roughly 35 minutes of monotonic singing of a set of sentences, one syllable per beat. The synthesis is divided in two steps. First, an expressive vowel singing performance of the target song is generated using the expression database. Next, this performance is used as input control of the synthesis using the timbre database and the target lyrics. A selection of synthetic performances have been submitted to the Interspeech Singing Synthesis Challenge 2016, in which they are compared to other competing systems.", "title": "" }, { "docid": "5e50ff15898a96b9dec220331c62820d", "text": "BACKGROUND AND PURPOSE\nPatients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA.\n\n\nMETHODS\nENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio, 2.0-3.0; median time-in-therapeutic range, 68.4%) versus once-daily edoxaban (higher-dose edoxaban regimen [HDER], 60/30 mg; lower-dose edoxaban regimen, 30/15 mg) with 2.8-year median follow-up. Primary end points included all stroke/systemic embolic events (efficacy) and major bleeding (safety). Because only HDER is approved, we focused on the comparison of HDER versus warfarin.\n\n\nRESULTS\nOf 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) >3 and 36% were ≥75 years. Compared with 15 132 without previous IS/TIA, patients with previous IS/TIA were at higher risk of both thromboembolism and bleeding (stroke/systemic embolic events 2.83% versus 1.42% per year; P<0.001; major bleeding 3.03% versus 2.64% per year; P<0.001; intracranial hemorrhage, 0.70% versus 0.40% per year; P<0.001). Among patients with previous IS/TIA, annualized intracranial hemorrhage rates were lower with HDER than with warfarin (0.62% versus 1.09%; absolute risk difference, 47 [8-85] per 10 000 patient-years; hazard ratio, 0.57; 95% confidence interval, 0.36-0.92; P=0.02). No treatment subgroup interactions were found for primary efficacy (P=0.86) or for intracranial hemorrhage (P=0.28).\n\n\nCONCLUSIONS\nPatients with atrial fibrillation with previous IS/TIA are at high risk of recurrent thromboembolism and bleeding. HDER is at least as effective and is safer than warfarin, regardless of the presence or the absence of previous IS or TIA.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: http://www.clinicaltrials.gov. Unique identifier: NCT00781391.", "title": "" }, { "docid": "99e8b7b6b883be51c5413c82ac1d5009", "text": "Named entities are usually composable and extensible. Typical examples are names of symptoms and diseases in medical areas. To distinguish these entities from general entities, we name them compound entities. 
In this paper, we present an attention-based Bi-GRU-CapsNet model to detect hypernymy relationship between compound entities. Our model consists of several important components. To avoid the out-of-vocabulary problem, English words or Chinese characters in compound entities are fed into the bidirectional gated recurrent units. An attention mechanism is designed to focus on the differences between two compound entities. Since there are some different cases in hypernymy relationship between compound entities, capsule network is finally employed to decide whether the hypernymy relationship exists or not. Experimental results demonstrate the advantages of our model over the state-of-the-art methods both on English and Chinese corpora of symptom and disease pairs.", "title": "" }, { "docid": "3ad124875f073ff961aaf61af2832815", "text": "EVERY HUMAN CULTURE HAS SOME FORM OF MUSIC WITH A BEAT: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This \"action simulation for auditory prediction\" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.", "title": "" }, { "docid": "2d340d004f81a9ed16ead41044103c5d", "text": "Bio-medical image segmentation is one of the promising sectors where nuclei segmentation from high-resolution histopathological images enables extraction of very high-quality features for nuclear morphometrics and other analysis metrics in the field of digital pathology. The traditional methods including Otsu thresholding and watershed methods do not work properly in different challenging cases. However, Deep Learning (DL) based approaches are showing tremendous success in different modalities of bio-medical imaging including computation pathology. Recently, the Recurrent Residual U-Net (R2U-Net) has been proposed, which has shown state-of-the-art (SOTA) performance in different modalities (retinal blood vessel, skin cancer, and lung segmentation) in medical image segmentation. However, in this implementation, the R2U-Net is applied to nuclei segmentation for the first time on a publicly available dataset that was collected from the Data Science Bowl Grand Challenge in 2018.
The R2U-Net shows around 92.15% segmentation accuracy in terms of the Dice Coefficient (DC) during the testing phase. In addition, the qualitative results show accurate segmentation, which clearly demonstrates the robustness of the R2U-Net model for the nuclei segmentation task.", "title": "" }, { "docid": "7eb7cfc2ca574b0965008117cf7070d9", "text": "We present a framework, Atlas, which incorporates application-awareness into Software-Defined Networking (SDN), which is currently capable of L2/3/4-based policy enforcement but agnostic to higher layers. Atlas enables fine-grained, accurate and scalable application classification in SDN. It employs a machine learning (ML) based traffic classification technique, a crowd-sourcing approach to obtain ground truth data and leverages SDN's data reporting mechanism and centralized control. We prototype Atlas on HP Labs wireless networks and observe 94% accuracy on average, for top 40 Android applications.", "title": "" }, { "docid": "6506a8e0d2772a719f025982770d7eea", "text": "The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the overall system throughput. NoSQL database performance is in turn strongly influenced by how well the data model and query capabilities fit the application use cases, and so system-specific testing and characterization is required. This paper presents a method and the results of a study that selected among three NoSQL databases for a large, distributed healthcare organization. While the method and study considered consistency, availability, and partition tolerance (CAP) tradeoffs, and other quality attributes that influence the selection decision, this paper reports on the performance evaluation method and results. In our testing, a typical workload and configuration produced throughput that varied from 225 to 3200 operations per second between database products, while read operation latency varied by a factor of 5 and write latency by a factor of 4 (with the highest throughput product delivering the highest latency). We also found that achieving strong consistency reduced throughput by 10-25% compared to eventual consistency.", "title": "" }, { "docid": "36b0ace93b5a902966e96e4649d83b98", "text": "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). 
Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.", "title": "" }, { "docid": "3257f01d96bd126bd7e3d6f447e0326d", "text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.", "title": "" }, { "docid": "2d0765e6b695348dea8822f695dcbfa1", "text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.", "title": "" }, { "docid": "bde769df506e361bf374bd494fc5db6f", "text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.", "title": "" }, { "docid": "4cfeef6e449e37219c75f8063220c1f8", "text": "The 20 century was based on local linear engineering of complicated systems. We made cars, airplanes and chemical plants for example. The 21ot century has opened a new basis for holistic non-linear design of complex systems, such as the Internet, air traffic management and nanotechnologies. Complexity, interconnectivity, interaction and communication are major attributes of our evolving society. But, more interestingly, we have started to understand that chaos theories may be more important than reductionism, to better understand and thrive on our planet. Systems need to be investigated and tested as wholes, which requires a cross-disciplinary approach and new conceptual principles and tools. Consequently, schools cannot continue to teach isolated disciplines based on simple reductionism. 
Science; Technology, Engineering, and Mathematics (STEM) should be integrated together with the Arts to promote creativity together with rationalization, and move to STEAM (with an \"A\" for Arts). This new concept emphasizes the possibility of longer-term socio-technical futures instead of short-term financial predictions that currently lead to uncontrolled economies. Human-centered design (HCD) can contribute to improving STEAM education technologies, systems and practices. HCD not only provides tools and techniques to build useful and usable things, but also an integrated approach to learning by doing, expressing and critiquing, exploring possible futures, and understanding complex systems.", "title": "" }, { "docid": "a75d3395a1d4859b465ccbed8647fbfe", "text": "PURPOSE\nThe influence of a core-strengthening program on low back pain (LBP) occurrence and hip strength differences were studied in NCAA Division I collegiate athletes.\n\n\nMETHODS\nIn 1998, 1999, and 2000, hip strength was measured during preparticipation physical examinations and occurrence of LBP was monitored throughout the year. Following the 1999-2000 preparticipation physicals, all athletes began participation in a structured core-strengthening program, which emphasized abdominal, paraspinal, and hip extensor strengthening. Incidence of LBP and the relationship with hip muscle imbalance were compared between consecutive academic years.\n\n\nRESULTS\nAfter incorporation of core strengthening, there was no statistically significant change in LBP occurrence. Side-to-side extensor strength between athletes participating in both the 1998-1999 and 1999-2000 physicals were no different. After core strengthening, the right hip extensor was, on average, stronger than that of the left hip extensor (P = 0.0001). More specific gender differences were noted after core strengthening. Using logistic regression, female athletes with weaker left hip abductors had a more significant probability of requiring treatment for LBP (P = 0.009)\n\n\nCONCLUSION\nThe impact of core strengthening on collegiate athletes has not been previously examined. These results indicated no significant advantage of core strengthening in reducing LBP occurrence, though this may be more a reflection of the small numbers of subjects who actually required treatment. The core program, however, seems to have had a role in modifying hip extensor strength balance. The association between hip strength and future LBP occurrence, observed only in females, may indicate the need for more gender-specific core programs. The need for a larger scale study to examine the impact of core strengthening in collegiate athletes is demonstrated.", "title": "" }, { "docid": "139ecd9ff223facaec69ad6532f650db", "text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. 
Results of this survey clearly indicate that offering mobile learning could be one method of improving retention of BSc students, by enhancing their teaching/learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.", "title": "" } ]
scidocsrr
cf5105829062cb5aa9769ca860d1d606
Waking and dreaming: Related but structurally independent. Dream reports of congenitally paraplegic and deaf-mute persons
[ { "docid": "63bd93cf0294d71db4aa0eb7b9a39fa2", "text": "Sleep researchers in different disciplines disagree about how fully dreaming can be explained in terms of brain physiology. Debate has focused on whether REM sleep dreaming is qualitatively different from nonREM (NREM) sleep and waking. A review of psychophysiological studies shows clear quantitative differences between REM and NREM mentation and between REM and waking mentation. Recent neuroimaging and neurophysiological studies also differentiate REM, NREM, and waking in features with phenomenological implications. Both evidence and theory suggest that there are isomorphisms between the phenomenology and the physiology of dreams. We present a three-dimensional model with specific examples from normally and abnormally changing conscious states.", "title": "" } ]
[ { "docid": "9ce3f1a67d23425e3920670ac5a1f9b4", "text": "We examine the limits of consistency in highly available and fault-tolerant distributed storage systems. We introduce a new property—convergence—to explore the these limits in a useful manner. Like consistency and availability, convergence formalizes a fundamental requirement of a storage system: writes by one correct node must eventually become observable to other connected correct nodes. Using convergence as our driving force, we make two additional contributions. First, we close the gap between what is known to be impossible (i.e. the consistency, availability, and partition-tolerance theorem) and known systems that are highly-available but that provide weaker consistency such as causal. Specifically, in an asynchronous system, we show that natural causal consistency, a strengthening of causal consistency that respects the real-time ordering of operations, provides a tight bound on consistency semantics that can be enforced without compromising availability and convergence. In an asynchronous system with Byzantine-failures, we show that it is impossible to implement many of the recently introduced forking-based consistency semantics without sacrificing either availability or convergence. Finally, we show that it is not necessary to compromise availability or convergence by showing that there exist practically useful semantics that are enforceable by available, convergent, and Byzantine-fault tolerant systems.", "title": "" }, { "docid": "93d4e6aba0ef5c17bb751ff93f0d3848", "text": "In this work we propose a new SIW structure, called the corrugated SIW (CSIW), which does not require conducting vias to achieve TE10 type boundary conditions at the side walls. Instead, the vias are replaced by quarter wavelength microstrip stubs arranged in a corrugated pattern on the edges of the waveguide. This, along with series interdigitated capacitors, results in a waveguide section comprising two separate conductors, which facilitates shunt connection of active components such as Gunn diodes.", "title": "" }, { "docid": "7ca863355d1fb9e4954c360c810ece53", "text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.", "title": "" }, { "docid": "4d089acf0f7e1bae074fc4d9ad8ee7e3", "text": "The consequences of exodontia include alveolar bone resorption and ultimately atrophy to basal bone of the edentulous site/ridges. 
Ridge resorption proceeds quickly after tooth extraction and significantly reduces the possibility of placing implants without grafting procedures. The aims of this article are to describe the rationale behind alveolar ridge augmentation procedures aimed at preserving or minimizing the edentulous ridge volume loss. Because the goal of these approaches is to preserve bone, exodontia should be performed to preserve as much of the alveolar process as possible. After severance of the supra- and subcrestal fibrous attachment using scalpels and periotomes, elevation of the tooth frequently allows extraction with minimal socket wall damage. Extraction sockets should not be acutely infected and be completely free of any soft tissue fragments before any grafting or augmentation is attempted. Socket bleeding that mixes with the grafting material seems essential for success of this procedure. Various types of bone grafting materials have been suggested for this purpose, and some have shown promising results. Coverage of the grafted extraction site with wound dressing materials, coronal flap advancement, or even barrier membranes may enhance wound stability and an undisturbed healing process. Future controlled clinical trials are necessary to determine the ideal regimen for socket augmentation.", "title": "" }, { "docid": "8b060d80674bd3f329a675f1a3f4bce2", "text": "Smartphones are ubiquitous devices that offer endless possibilities for health-related applications such as Ambient Assisted Living (AAL). They are rich in sensors that can be used for Human Activity Recognition (HAR) and monitoring. The emerging problem now is the selection of optimal combinations of these sensors and existing methods to accurately and efficiently perform activity recognition in a resource and computationally constrained environment. To accomplish efficient activity recognition on mobile devices, the most discriminative features and classification algorithms must be chosen carefully. In this study, sensor fusion is employed to improve the classification results of a lightweight classifier. Furthermore, the recognition performance of accelerometer, gyroscope and magnetometer when used separately and simultaneously on a feature-level sensor fusion is examined to gain valuable knowledge that can be used in dynamic sensing and data collection. Six ambulatory activities, namely, walking, running, sitting, standing, walking upstairs and walking downstairs, are inferred from low-sensor data collected from the right trousers pocket of the subjects and feature selection is performed to further optimize resource use.", "title": "" }, { "docid": "2e475a64d99d383b85730e208703e654", "text": "—Detecting a variety of anomalies in computer network, especially zero-day attacks, is one of the real challenges for both network operators and researchers. An efficient technique detecting anomalies in real time would enable network operators and administrators to expeditiously prevent serious consequences caused by such anomalies. We propose an alternative technique, which based on a combination of time series and feature spaces, for using machine learning algorithms to automatically detect anomalies in real time. 
Our experimental results show that the proposed technique can work well for a real network environment, and it is a feasible technique with flexible capabilities to be applied for real-time anomaly detection.", "title": "" }, { "docid": "8d9246e7780770b5f7de9ef0adbab3e6", "text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.", "title": "" }, { "docid": "0dafc618dbeb04c5ee347142d915a415", "text": "Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.", "title": "" }, { "docid": "757c7ede10552c51ad4e91bff275f96c", "text": "For several years, web caching has been used to meet the ever-increasing Web access loads. A fundamental capability of all such systems is that of inter-cache coordination, which can be divided into two main types: explicit and implicit coordination. While the former allows for greater control over resource allocation, the latter does not suffer from the additional communication overhead needed for coordination. In this paper, we consider a network in which each router has a local cache that caches files passing through it. By additionally storing minimal information regarding caching history, we develop a simple content caching, location, and routing systems that adopts an implicit, transparent, and best-effort approach towards caching. Though only best effort, the policy outperforms classic policies that allow explicit coordination between caches.", "title": "" }, { "docid": "64723e2bb073d0ba4412a9affef16107", "text": "The debate on the entrepreneurial university has raised questions about what motivates academics to engage with industry. 
This paper provides evidence, based on survey data for a comprehensive sample of UK investigators in the physical and engineering sciences. Our results suggest that most academics engage with industry to further their research rather than to commercialize their knowledge. However, there are differences in terms of the channels of engagement. While patenting and spin-off company formation is motivated exclusively by commercialization, joint research, contract research and consulting are strongly informed by research-related motives. We conclude that policy should refrain from focusing on monetary incentives for industry engagement and consider a broader range of incentives for promoting interaction between academia and industry.", "title": "" }, { "docid": "dc71b53847d33e82c53f0b288da89bfa", "text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.", "title": "" }, { "docid": "a064a4b8e19068526e417643788d0b04", "text": "Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.", "title": "" }, { "docid": "2b942943bebdc891a4c9fa0f4ac65a4b", "text": "A new architecture based on the Multi-channel Convolutional Neural Network (MCCNN) is proposed for recognizing facial expressions. Two hard-coded feature extractors are replaced by a single channel which is partially trained in an unsupervised fashion as a Convolutional Autoencoder (CAE). One additional channel that contains a standard CNN is left unchanged. Information from both channels converges in a fully connected layer and is then used for classification. 
We perform two distinct experiments on the JAFFE dataset (leave-one-out and ten-fold cross validation) to evaluate our architecture. Our comparison with the previous model that uses hard-coded Sobel features shows that an additional channel of information with unsupervised learning can significantly boost accuracy and reduce the overall training time. Furthermore, experimental results are compared with benchmarks from the literature showing that our method provides state-of-the-art recognition rates for facial expressions. Our method outperforms previously published methods that used hand-crafted features by a large margin.", "title": "" }, { "docid": "b598cf655e2a039923163271fefb8ede", "text": "The 3GPP has recently published the first version of the Release 14 standard that includes support for V2V communications using LTE sidelink communications (referred to as LTE-V, LTE-V2X, LTE-V2V or Cellular V2X). The standard includes a mode (mode 4) where vehicles autonomously select and manage the radio resources without any cellular infrastructure support. This is highly relevant since V2V safety applications cannot depend on the availability of infrastructure-based cellular coverage, and transforms LTE-V into a possible (or complimentary) alternative to 802.11p. The performance of LTE-V in mode 4 is highly dependent on its distributed scheduling protocol (sensing-based Semi-Persistent Scheduling) that is used by vehicles to reserve resources for their transmissions. This paper presents the first evaluation of the performance and operation of this protocol under realistic traffic conditions in urban scenarios. The evaluation demonstrates that further enhancements should be investigated to reduce packet collisions.", "title": "" }, { "docid": "25305e33949beff196ff6c0946d1807b", "text": "Clinical and preclinical studies have gathered substantial evidence that stress response alterations play a major role in the development of major depression, panic disorder and posttraumatic stress disorder. The stress response, the hypothalamic pituitary adrenocortical (HPA) system and its modulation by CRH, corticosteroids and their receptors as well as the role of natriuretic peptides and neuroactive steroids are described. Examplarily, we review the role of the HPA system in major depression, panic disorder and posttraumatic stress disorder as well as its possible relevance for treatment. Impaired glucocorticoid receptor function in major depression is associated with an excessive release of neurohormones, like CRH to which a number of signs and symptoms characteristic of depression can be ascribed. In panic disorder, a role of central CRH in panic attacks has been suggested. Atrial natriuretic peptide (ANP) is causally involved in sodium lactate-induced panic attacks. Furthermore, preclinical and clinical data on its anxiolytic activity suggest that non-peptidergic ANP receptor ligands may be of potential use in the treatment of anxiety disorders. Recent data further suggest a role of 3alpha-reduced neuroactive steroids in major depression, panic attacks and panic disorder. Posttraumatic stress disorder is characterized by a peripheral hyporesponsive HPA-system and elevated CRH concentrations in CSF. This dissociation is probably related to an increased risk for this disorder. Antidepressants are effective both in depression and anxiety disorders and have major effects on the HPA-system, especially on glucocorticoid and mineralocorticoid receptors. 
Normalization of HPA-system abnormalities is a strong predictor of the clinical course, at least in major depression and panic disorder. CRH-R1 or glucorticoid receptor antagonists and ANP receptor agonists are currently being studied and may provide future treatment options more closely related to the pathophysiology of the disorders.", "title": "" }, { "docid": "ea304e700faa3d3cae4bff89cf01c397", "text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds, and smaller chip area. This paper presents a pair of circuits for implementing a ternary half adder using carbon nanotube field-effect transistors. The proposed designs combine both futuristic ternary and conventional binary logic design approach. One of the proposed circuits for ternary to binary decoder simplifies further circuit implementation and provides excellent delay and power advantages in data path circuit such as adder. These circuits have been extensively simulated using HSPICE to obtain power, delay, and power delay product. The circuit performances are compared with alternative designs reported in recent literature. One of the proposed ternary adders has been demonstrated power, power delay product improvement up to 63% and 66% respectively, with lesser transistor count. So, the use of these half adders in complex arithmetic circuits will be advantageous.", "title": "" }, { "docid": "5935224c53222d0234adffddae23eb04", "text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.", "title": "" }, { "docid": "9e8a1a70af4e52de46d773cec02f99a7", "text": "In this paper, we build a corpus of tweets from Twitter annotated with keywords using crowdsourcing methods. 
We identify key differences between this domain and the work performed on other domains, such as news, which makes existing approaches for automatic keyword extraction not generalize well on Twitter datasets. These datasets include the small amount of content in each tweet, the frequent usage of lexical variants and the high variance of the cardinality of keywords present in each tweet. We propose methods for addressing these issues, which leads to solid improvements on this dataset for this task.", "title": "" }, { "docid": "5e9f0743d7f913769967772038a85c01", "text": "A human listener has the remarkable ability to segregate an acoustic mixture and attend to a target sound. This perceptual process is called auditory scene analysis (ASA). Moreover, the listener can accomplish much of auditory scene analysis with only one ear. Research in ASA has inspired many studies in computational auditory scene analysis (CASA) for sound segregation. In this chapter we introduce a CASA approach to monaural speech segregation. After a brief overview of CASA, we present in detail a CASA system that segregates both voiced and unvoiced speech. Our description covers the major stages of CASA, including feature extraction, auditory segmentation, and grouping.", "title": "" }, { "docid": "b51c309fb2d77da3647739c41d71fd5a", "text": "We propose a benchmark for 6D pose estimation of a rigid object from a single RGB-D input image. The training data consists of a texture-mapped 3D object model or images of the object in known 6D poses. The benchmark comprises of: i) eight datasets in a unified format that cover different practical scenarios, including two new datasets focusing on varying lighting conditions, ii) an evaluation methodology with a pose-error function that deals with pose ambiguities, iii) a comprehensive evaluation of 15 diverse recent methods that captures the status quo of the field, and iv) an online evaluation system that is open for continuous submission of new results. The evaluation shows that methods based on point-pair features currently perform best, outperforming template matching methods, learning-based methods and methods based on 3D local features. The project website is available at bop.felk.cvut.cz.", "title": "" } ]
scidocsrr
46e4ea2c5d97473363c1a5aeca4866d0
A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
[ { "docid": "a33cf416cf48f67cd0a91bf3a385d303", "text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.", "title": "" }, { "docid": "f1eb96dd2109aad21ac1bccfe8dcd012", "text": "In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations. Existing imitation learning algorithms typically involve solving a sequence of planning or reinforcement learning problems. Such algorithms are therefore not directly applicable to large, high-dimensional environments, and their performance can significantly degrade if the planning problems are not solved to optimality. Under the apprenticeship learning formalism, we develop alternative model-free algorithms for finding a parameterized stochastic policy that performs at least as well as an expert policy on an unknown cost function, based on sample trajectories from the expert. Our approach, based on policy gradients, scales to large continuous environments with guaranteed convergence to local minima.", "title": "" } ]
[ { "docid": "e1f76f158f0e96326c17a6a61f2072cb", "text": "In this paper, we propose a metric rectification method to restore an image from a single camera-captured document image. The core idea is to construct an isometric image mesh by exploiting the geometry of page surface and camera. Our method uses a general cylindrical surface (GCS) to model the curved page shape. Under a few proper assumptions, the printed horizontal text lines are shown to be line convergent symmetric. This property is then used to constrain the estimation of various model parameters under perspective projection. We also introduce a paraperspective projection to approximate the nonlinear perspective projection. A set of close-form formulas is thus derived for the estimate of GCS directrix and document aspect ratio. Our method provides a straightforward framework for image metric rectification. It is insensitive to camera positions, viewing angles, and the shapes of document pages. To evaluate the proposed method, we implemented comprehensive experiments on both synthetic and real-captured images. The results demonstrate the efficiency of our method. We also carried out a comparative experiment on the public CBDAR2007 data set. The experimental results show that our method outperforms the state-of-the-art methods in terms of OCR accuracy and rectification errors.", "title": "" }, { "docid": "66d584c242fb96527cef9b3b084d23a8", "text": "Online discussions boards represent a rich repository of knowledge organized in a collection of user generated content. These conversational cyberspaces allow users to express opinions, ideas and pose questions and answers without imposing strict limitations about the content. This freedom, in turn, creates an environment in which discussions are not bounded and often stray from the initial topic being discussed. In this paper we focus on approaches to assess the relevance of posts to a thread and detecting when discussions have been steered off-topic. A set of metrics estimating the level of novelty in online discussion posts are presented. These metrics are based on topical estimation and contextual similarity between posts within a given thread. The metrics are aggregated to rank posts based on the degree of relevance they maintain. The aggregation scheme is data-dependent and is normalized relative to the post length.", "title": "" }, { "docid": "7ff483824e208e892cd4ee50bb94e471", "text": "Gentle stroking touches are rated most pleasant when applied at a velocity of between 1-10 cm/s. Such touches are considered highly relevant in social interactions. Here, we investigate whether stroking sensations generated by a vibrotactile array can produce similar pleasantness responses, with the ultimate goal of using this type of haptic display in technology mediated social touch. A study was conducted in which participants received vibrotactile stroking stimuli of different velocities and intensities, applied to their lower arm. Results showed that the stimuli were perceived as continuous stroking sensations in a straight line. Furthermore, pleasantness ratings for low intensity vibrotactile stroking followed an inverted U-curve, similar to that found in research into actual stroking touches. 
The implications of these findings are discussed.", "title": "" }, { "docid": "14a8adf666b115ff4a72ff600432ff07", "text": "In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable, and that any error or discrepancy represents a wrong that must be punished. Radiology involves decision-making under conditions of uncertainty, and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray reporting stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly-perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank).", "title": "" }, { "docid": "ac56eb533e3ae40b8300d4269fd2c08f", "text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.", "title": "" }, { "docid": "837b9d2834b72c7d917203457aafa421", "text": "The strongly nonlinear magnetic characteristic of Switched Reluctance Motors (SRMs) makes their torque control a challenging task. In contrast to standard current-based control schemes, we use Model Predictive Control (MPC) and directly manipulate the switches of the dc-link power converter. At each sampling time a constrained finite-time optimal control problem based on a discrete-time nonlinear prediction model is solved yielding a receding horizon control strategy. The control objective is torque regulation while winding currents and converter switching frequency are minimized. 
Simulations demonstrate that a good closed-loop performance is achieved already for short prediction horizons indicating the high potential of MPC in the control of SRMs.", "title": "" }, { "docid": "453d5d826e0292245f8fa12ec564c719", "text": "Work with patient H.M., beginning in the 1950s, established key principles about the organization of memory that inspired decades of experimental work. Since H.M., the study of human memory and its disorders has continued to yield new insights and to improve understanding of the structure and organization of memory. Here we review this work with emphasis on the neuroanatomy of medial temporal lobe and diencephalic structures important for memory, multiple memory systems, visual perception, immediate memory, memory consolidation, the locus of long-term memory storage, the concepts of recollection and familiarity, and the question of how different medial temporal lobe structures may contribute differently to memory functions.", "title": "" }, { "docid": "11761dbbb0ad3b523a7a565a14a476d8", "text": "Already in his first report on the discovery of the human EEG in 1929, Berger showed great interest in further elucidating the functional roles of the alpha and beta waves for normal mental activities. Meanwhile, most cognitive processes have been linked to at least one of the traditional frequency bands in the delta, theta, alpha, beta, and gamma range. Although the existing wealth of high-quality correlative EEG data led many researchers to the conviction that brain oscillations subserve various sensory and cognitive processes, a causal role can only be demonstrated by directly modulating such oscillatory signals. In this review, we highlight several methods to selectively modulate neuronal oscillations, including EEG-neurofeedback, rhythmic sensory stimulation, repetitive transcranial magnetic stimulation (rTMS), and transcranial alternating current stimulation (tACS). In particular, we discuss tACS as the most recent technique to directly modulate oscillatory brain activity. Such studies demonstrating the effectiveness of tACS comprise reports on purely behavioral or purely electrophysiological effects, on combination of behavioral effects with offline EEG measurements or on simultaneous (online) tACS-EEG recordings. Whereas most tACS studies are designed to modulate ongoing rhythmic brain activity at a specific frequency, recent evidence suggests that tACS may also modulate cross-frequency interactions. Taken together, the modulation of neuronal oscillations allows to demonstrate causal links between brain oscillations and cognitive processes and to obtain important insights into human brain function.", "title": "" }, { "docid": "66720892b48188c10d05937367dbd25e", "text": "In wireless sensor network (WSN) [1], energy efficiency is one of the very important issues. Protocols in WSNs are broadly classified as Hierarchical, Flat and Location Based routing protocols. Hierarchical routing is used to perform efficient routing in WSN. Here we concentrate on Hierarchical Routing protocols, different types of Hierarchical routing protocols, and PEGASIS (Power-Efficient Gathering in Sensor Information Systems) [2, 3] based routing", "title": "" }, { "docid": "5c9013c9514dc7deaa0b87fe9cd6db16", "text": "To predict the uses of new technology, we present an approach grounded in science and technology studies (STS) that examines the social uses of current technology. 
As part of ongoing research on next-generation mobile imaging applications, we conducted an empirical study of the social uses of personal photography. We identify three: memory, creating and maintaining relationships, and self-expression. The roles of orality and materiality in these uses help us explain the observed resistances to intangible digital images and to assigning metadata and annotations. We conclude that this approach is useful for understanding the potential uses of technology and for design.", "title": "" }, { "docid": "cc45fefcf65e5ab30d5bb68d390beb4c", "text": "In this paper, the basic running performance of the cylindrical tracked vehicle with sideways mobility is presented. The crawler mechanism is of circular cross-section and has active rolling axes at the center of the circles. Conventional crawler mechanisms can support massive loads, but cannot produce sideways motion. Additionally, previous crawler edges sink undesirably on soft ground, particularly when the vehicle body is subject to a sideways tilt. The proposed design solves these drawbacks by adopting a circular cross-section crawler. A prototype. Basic motion experiments with confirm the novel properties of this mechanism: sideways motion and robustness against edge-sink.", "title": "" }, { "docid": "d6976361b44aab044c563e75056744d6", "text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. 
Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).", "title": "" }, { "docid": "7e17c1842a70e416f0a90bdcade31a8e", "text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.", "title": "" }, { "docid": "e52f5174a9d5161e18eced6e2eb36684", "text": "The clinical use of ivabradine has and continues to evolve along channels that are predicated on its mechanism of action. It selectively inhibits the funny current (If) in sinoatrial nodal tissue, resulting in a decrease in the rate of diastolic depolarization and, consequently, the heart rate, a mechanism that is distinct from those of other negative chronotropic agents. Thus, it has been evaluated and is used in select patients with systolic heart failure and chronic stable angina without clinically significant adverse effects. Although not approved for other indications, ivabradine has also shown promise in the management of inappropriate sinus tachycardia. Here, the authors review the mechanism of action of ivabradine and salient studies that have led to its current clinical indications and use.", "title": "" }, { "docid": "f5bd155887dd2e40ad2d7a26bb5a6391", "text": "The field of research in digital humanities is undergoing a rapid transformation in recent years. A deep reflection on the current needs of the agents involved that takes into account key issues such as the inclusion of citizens in the creation and consumption of the cultural resources offered, the volume and complexity of datasets, available infrastructures, etcetera, is necessary. Present technologies make it possible to achieve projects that were impossible until recently, but the field is currently facing the challenge of proposing frameworks and systems to generalize and reproduce these proposals in other knowledge domains with similar but heterogeneous data sets. The track \"New trends in digital humanities\" of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM 2016), tries to set the basis of good practice in digital humanities by reflecting on models, technologies and methods to carry the transformation out.", "title": "" }, { "docid": "5e194b5c1b14b423e955880de810eaba", "text": "A human body detection algorithm based on the combination of moving information with shape information is proposed in the paper. Firstly, Eigen-object computed from three frames in the initial video sequences is used to detect the moving object. Secondly, the shape information of human body is used to classify human and other object. Furthermore, the occlusion between two objects during a short time is processed by using continues multiple frames. The advantages of the algorithm are accurately moving object detection, and the detection result doesn't effect by body pose. 
Moreover, the shadow of the moving object has been eliminated.", "title": "" }, { "docid": "30ef95dffecc369aabdd0ea00b0ce299", "text": "The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to backup user's data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication does certainly not come for free. It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the feasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated to a software clone on the cloud. We consider two types of clones: The off-clone, whose purpose is to support computation offloading, and the back-clone, which comes to use when a restore of user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones have been used as the primary mobile by the participants for the whole experiment duration.", "title": "" }, { "docid": "c543f7a65207e7de9cc4bc6fa795504a", "text": "Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.", "title": "" }, { "docid": "4072b14516d9a7b74bec64535cdb64d8", "text": "The idea of a unified citation index to the literature of science was first outlined by Eugene Garfield [1] in 1955 in the journal Science. Science Citation Index has since established itself as the gold standard for scientific information retrieval. It has also become the database of choice for citation analysts and evaluative bibliometricians worldwide. As scientific publication moves to the web, and novel approaches to scholarly communication and peer review establish themselves, new methods of citation and link analysis will emerge to capture often liminal expressions of peer esteem, influence and approbation.
The web thus affords bibliometricians rich opportunities to apply and adapt their techniques to new contexts and content: the age of ‘bibliometric spectroscopy’ [2] is dawning.", "title": "" } ]
scidocsrr
9fa9f6e114c662ae25445f9caf004af2
Bitcoin-NG: A Scalable Blockchain Protocol
[ { "docid": "0f0799a04328852b8cfa742cbc2396c9", "text": "Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allow instant transfers, laying the foundation of the PSP network.", "title": "" } ]
[ { "docid": "6e4d8bde993e88fa2c729d2fafb6fd90", "text": "The plant hormones gibberellin and abscisic acid regulate gene expression, secretion and cell death in aleurone. The emerging picture is of gibberellin perception at the plasma membrane whereas abscisic acid acts at both the plasma membrane and in the cytoplasm - although gibberellin and abscisic acid receptors have yet to be identified. A range of downstream-signalling components and events has been implicated in gibberellin and abscisic acid signalling in aleurone. These include the Galpha subunit of a heterotrimeric G protein, a transient elevation in cGMP, Ca2+-dependent and Ca2+-independent events in the cytoplasm, reversible protein phosphory-lation, and several promoter cis-elements and transcription factors, including GAMYB. In parallel, molecular genetic studies on mutants of Arabidopsis that show defects in responses to these hormones have identified components of gibberellin and abscisic acid signalling. These two approaches are yielding results that raise the possibility that specific gibberellin and abscisic acid signalling components perform similar functions in aleurone and other tissues.", "title": "" }, { "docid": "699c2891ce4988901f4b5a6b390906a3", "text": "In this work, we address the problem of cross-modal retrieval in presence of multi-label annotations. In particular, we introduce multi-label Canonical Correlation Analysis (ml-CCA), an extension of CCA, for learning shared subspaces taking into account high level semantic information in the form of multi-label annotations. Unlike CCA, ml-CCA does not rely on explicit pairing between modalities, instead it uses the multi-label information to establish correspondences. This results in a discriminative subspace which is better suited for cross-modal retrieval tasks. We also present Fast ml-CCA, a computationally efficient version of ml-CCA, which is able to handle large scale datasets. We show the efficacy of our approach by conducting extensive cross-modal retrieval experiments on three standard benchmark datasets. The results show that the proposed approach achieves state of the art retrieval performance on the three datasets.", "title": "" }, { "docid": "65ffbc6ee36ae242c697bb81ff3be23a", "text": "Full-duplex hands-free telecommunication systems employ an acoustic echo canceler (AEC) to remove the undesired echoes that result from the coupling between a loudspeaker and a microphone. Traditionally, the removal is achieved by modeling the echo path impulse response with an adaptive finite impulse response (FIR) filter and subtracting an echo estimate from the microphone signal. It is not uncommon that an adaptive filter with a length of 50-300 ms needs to be considered, which makes an AEC highly computationally expensive. In this paper, we propose an echo suppression algorithm to eliminate the echo effect. Instead of identifying the echo path impulse response, the proposed method estimates the spectral envelope of the echo signal. The suppression is done by spectral modification-a technique originally proposed for noise reduction. It is shown that this new approach has several advantages over the traditional AEC. Properties of human auditory perception are considered, by estimating spectral envelopes according to the frequency selectivity of the auditory system, resulting in improved perceptual quality. A conventional AEC is often combined with a post-processor to reduce the residual echoes due to minor echo path changes. 
It is shown that the proposed algorithm is insensitive to such changes. Therefore, no post-processor is necessary. Furthermore, the new scheme is computationally much more efficient than a conventional AEC.", "title": "" }, { "docid": "87343436b0ea16f9683360fd84506331", "text": "Accurate measurements of soil macronutrients (i.e., nitrogen, phosphorus, and potassium) are needed for efficient agricultural production, including site-specific crop management (SSCM), where fertilizer nutrient application rates are adjusted spatially based on local requirements. Rapid, non-destructive quantification of soil properties, including nutrient levels, has been possible with optical diffuse reflectance sensing. Another approach, electrochemical sensing based on ion-selective electrodes or ion-selective field effect transistors, has been recognized as useful in real-time analysis because of its simplicity, portability, rapid response, and ability to directly measure the analyte with a wide range of sensitivity. Current sensor developments and related technologies that are applicable to the measurement of soil macronutrients for SSCM are comprehensively reviewed. Examples of optical and electrochemical sensors applied in soil analyses are given, while advantages and obstacles to their adoption are discussed. It is proposed that on-the-go vehicle-based sensing systems have potential for efficiently and rapidly characterizing variability of soil macronutrients within a field.", "title": "" }, { "docid": "70bee569e694c92b79bd5e7dc586cbdc", "text": "Synchronous reluctance machines (SynRM) have been used widely in industries for instance, in ABB's new VSD product package based on SynRM technology. It is due to their unique merits such as high efficiency, fast dynamic response, and low cost. However, considering the major requirements for traction applications such as high torque and power density, low torque ripple, wide speed range, proper size, and capability of meeting a specific torque envelope, this machine is still under investigation to be developed for traction applications. Since the choice of motor for traction is generally determined by manufacturers with respect to three dominant factors: cost, weight, and size, the SynRM can be considered a strong alternative due to its high efficiency and lower cost. Hence, the machine's proper size estimation is a major step of the design process before attempting the rotor geometry design. This is crucial in passenger vehicles in which compactness is a requirement and the size and weight are indeed the design limitations. This paper presents a methodology for sizing a SynRM. The electric and magnetic parameters of the proposed machine in conjunction with the core dimensions are calculated. Then, the proposed method's validity and evaluation are done using FE analysis.", "title": "" }, { "docid": "c8482ed26ba2c4ba1bd3eed6ac0e00b4", "text": "Virtual Reality (VR) has now emerged as a promising tool in many domains of therapy and rehabilitation (Rizzo, Schultheis, Kerns & Mateer, 2004; Weiss & Jessel, 1998; Zimand, Anderson, Gershon, Graap, Hodges, & Rothbaum, 2002; Glantz, Rizzo & Graap, 2003). Continuing advances in VR technology along with concomitant system cost reductions have supported the development of more usable, useful, and accessible VR systems that can uniquely target a wide range of physical, psychological, and cognitive rehabilitation concerns and research questions. 
What makes VR application development in the therapy and rehabilitation sciences so distinctively important is that it represents more than a simple linear extension of existing computer technology for human use. VR offers the potential to create systematic human testing, training and treatment environments that allow for the precise control of complex dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking and performance recording is possible. Much like an aircraft simulator serves to test and train piloting ability, virtual environments (VEs) can be developed to present simulations that assess and rehabilitate human functional performance under a range of stimulus conditions that are not easily deliverable and controllable in the real world. When combining these assets within the context of functionally relevant, ecologically enhanced VEs, a fundamental advancement could emerge in how human functioning can be addressed in many rehabilitation disciplines.", "title": "" }, { "docid": "115e2a6c5f8fdd3a8a720fcdf0cf3a6d", "text": "In this work we present an Artificial Neural Network (ANN) approach to predict stock market indices. In particular, we focus our attention on their trend movement up or down. We provide results of experiments exploiting different Neural Networks architectures, namely the Multi-layer Perceptron (MLP), the Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) recurrent neural networks technique. We show importance of choosing correct input features and their preprocessing for learning algorithm. Finally we test our algorithm on the S&P500 and FOREX EUR/USD historical time series, predicting trend on the basis of data from the past n days, in the case of S&P500, or minutes, in the FOREX framework. We provide a novel approach based on combination of wavelets and CNN which outperforms basic neural networks approaches. Key–Words: Artificial neural networks, Multi-layer neural network, Convolutional neural network, Long shortterm memory, Recurrent neural network, Deep Learning, Stock markets, Time series analysis, financial forecasting", "title": "" }, { "docid": "b3c9bc55f5a9d64a369ec67e1364c4fc", "text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. 
Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.", "title": "" }, { "docid": "da3634b5a14829b22546389e56425017", "text": "Homomorphic encryption (HE)—the ability to perform computations on encrypted data—is an attractive remedy to increasing concerns about data privacy in the field of machine learning. However, building models that operate on ciphertext is currently labor-intensive and requires simultaneous expertise in deep learning, cryptography, and software engineering. Deep learning frameworks, together with recent advances in graph compilers, have greatly accelerated the training and deployment of deep learning models to a variety of computing platforms. Here, we introduce nGraph-HE, an extension of the nGraph deep learning compiler, which allows data scientists to deploy trained models with popular frameworks like TensorFlow, MXNet and PyTorch directly, while simply treating HE as another hardware target. This combination of frameworks and graph compilers greatly simplifies the development of privacy-preserving machine learning systems, provides a clean abstraction barrier between deep learning and HE, allows HE libraries to exploit HE-specific graph optimizations, and comes at a low cost in runtime overhead versus native HE operations.", "title": "" }, { "docid": "fd3dd59550806b93a625f6e6750e888f", "text": "Location-based services have become widely available on mobile devices. Existing methods employ a pull model or user-initiated model, where a user issues a query to a server which replies with location-aware answers. To provide users with instant replies, a push model or server-initiated model is becoming an inevitable computing model in the next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. This calls for a high-performance location-aware publish/subscribe system to deliver publishers' messages to relevant subscribers.In this paper, we address the research challenges that arise in designing a location-aware publish/subscribe system. We propose an rtree based index structure by integrating textual descriptions into rtree nodes. We devise efficient filtering algorithms and develop effective pruning techniques to improve filtering efficiency. Experimental results show that our method achieves high performance. For example, our method can filter 500 tweets in a second for 10 million registered subscriptions on a commodity computer.", "title": "" }, { "docid": "b62b8862d26e5ce5bcbd2b434aff5d0e", "text": "In this demo paper we present Docear's research paper recommender system. Docear is an academic literature suite to search, organize, and create research articles. The users' data (papers, references, annotations, etc.) is managed in mind maps and these mind maps are utilized for the recommendations. Using content-based filtering methods, Docear's recommender achieves click-through rates around 6%, in some scenarios even over 10%.", "title": "" }, { "docid": "ed0d234b961befcffab751f70f5c5fdb", "text": "UNLABELLED\nA challenging aspect of managing patients on venoarterial extracorporeal membrane oxygenation (V-A ECMO) is a thorough understanding of the relationship between oxygenated blood from the ECMO circuit and blood being pumped from the patient's native heart. 
We present an adult V-A ECMO case report, which illustrates a unique encounter with the concept of \"dual circulations.\" Despite blood gases from the ECMO arterial line showing respiratory acidosis, this patient with cardiogenic shock demonstrated regional respiratory alkalosis when blood was sampled from the right radial arterial line. In response, a sample was obtained from the left radial arterial line, which mimicked the ECMO arterial blood but was dramatically different from the blood sampled from the right radial arterial line. A retrospective analysis of patient data revealed that the mismatch of blood gas values in this patient corresponded to an increased pulse pressure. Having three arterial blood sampling sites and data on the patient's pulse pressure provided a dynamic view of blood mixing and guided proper management, which contributed to a successful patient outcome that otherwise may not have occurred. As a result of this unique encounter, we created and distributed graphics representing the concept of \"dual circulations\" to facilitate the education of ECMO specialists at our institution.\n\n\nKEYWORDS\nECMO, education, cardiopulmonary bypass, cannulation.", "title": "" }, { "docid": "508eb69a9e6b0194fbda681439e404c4", "text": "Price forecasting is becoming increasingly relevant to producers and consumers in the new competitive electric power markets. Both for spot markets and long-term contracts, price forecasts are necessary to develop bidding strategies or negotiation skills in order to maximize benefit. This paper provides a method to predict next-day electricity prices based on the ARIMA methodology. ARIMA techniques are used to analyze time series and, in the past, have been mainly used for load forecasting due to their accuracy and mathematical soundness. A detailed explanation of the aforementioned ARIMA models and results from mainland Spain and Californian markets are presented.", "title": "" }, { "docid": "6e5792c73b34eacc7bef2c8777da5147", "text": "Neural network machine translation systems have recently demonstrated encouraging results. We examine the performance of a recently proposed recurrent neural network model for machine translation on the task of Japanese-to-English translation. We observe that with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary. The success of this model on a small corpus warrants more investigation of its performance on a larger corpus.", "title": "" }, { "docid": "9280eb309f7a6274eb9d75d898768f56", "text": "In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. 
Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.", "title": "" }, { "docid": "f519d349d928e7006955943043ab0eae", "text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handing, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.", "title": "" }, { "docid": "aec7ed67f393650953c5dc99d0d66a38", "text": "BACKGROUND\nThe pes cavus deformity has been well described in the literature; relative bony positions have been determined and specific muscle imbalances have been summarized. However, we are unaware of a cadaveric model that has been used to generate this foot pathology. The purpose of this study was to create such a model for future work on surgical and conservative treatment simulation.\n\n\nMATERIALS AND METHODS\nWe used a custom designed, pneumatically actuated loading frame to apply forces to otherwise normal cadaveric feet while measuring bony motion as well as force beneath the foot. The dorsal tarsometatarsal and the dorsal intercuneiform ligaments were attenuated and three muscle imbalances, each similar to imbalances believed to cause the pes cavus deformity, were applied while bony motion and plantar forces were measured.\n\n\nRESULTS\nOnly one of the muscle imbalances (overpull of the Achilles tendon, tibialis anterior, tibialis posterior, flexor hallucis longus and flexor digitorum longus) was successful at consistently generating the changes seen in pes cavus feet. This imbalance led to statistically significant changes including hindfoot inversion, talar dorsiflexion, medial midfoot plantar flexion and inversion, forefoot plantar flexion and adduction and an increase in force on the lateral mid- and forefoot.\n\n\nCONCLUSION\nWe have created a cadaveric model that approximates the general changes of the pes cavus deformity compared to normal feet. 
These changes mirror the general patterns of deformity produced by several disease mechanisms.\n\n\nCLINICAL RELEVANCE\nFuture work will entail increasing the severity of the model and exploring various pes cavus treatment strategies.", "title": "" }, { "docid": "b1a69a47cce9ecc51b03d8b4a306e605", "text": "We use an innovative survey tool to collect management practice data from 732 medium sized manufacturing firms in the US and Europe (France, Germany and the UK). Our measures of managerial best practice are strongly associated with superior firm performance in terms of productivity, profitability, Tobin’s Q, sales growth and survival. We also find significant intercountry variation with US firms on average better managed than European firms, but a much greater intra-country variation with a long tail of extremely badly managed firms. This presents a dilemma – why do so many firms exist with apparently inferior management practices, and why does this vary so much across countries? We find this is due to a combination of: (i) low product market competition and (ii) family firms passing management control down to the eldest sons (primo geniture). European firms in our sample report facing lower levels of competition, and substantially higher levels of primo geniture. These two factors appear to account for around half of the long tail of badly managed firms and half of the average US-Europe gap in management performance.", "title": "" }, { "docid": "745451b3ca65f3388332232b370ea504", "text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.", "title": "" }, { "docid": "c18cec45829e4aec057443b9da0eeee5", "text": "This paper presents a synthesis on the application of fuzzy integral as an innovative tool for criteria aggregation in decision problems. The main point is that fuzzy integrals are able to model interaction between criteria in a flexible way. The methodology has been elaborated mainly in Japan, and has been applied there successfully in various fields such as design, reliability, evaluation of goods, etc. It seems however that this technique is still very little known in Europe. It is one of the aim of this review to disseminate this emerging technology in many industrial fields.", "title": "" } ]
scidocsrr
7499476ab60378a53aa9ef1585b520c4
From Temporary Competitive Advantage to Sustainable Competitive Advantage
[ { "docid": "7a9c163d5efbe1bf1d7178bb5d7116a0", "text": "This paper examines interfirm knowledge transfers within strategic alliances. Using a new measure of changes in alliance partners' technological capabilities, based on the citation patterns of their patent portfolios. we analyze changes in the extent to which partner firms' technological resources 'overlap' as a result of alliance participation. This measure allows us to test hypothesesfrom the literature on interfirm knowledge transfer in alliances, with interesting results: we find support for some elements of this 'received wisdom'-equity arrangements promote greater knowledge transfer, and 'absorptive capacity' helps explain the extent of technological capability transfer, at least in some alliances. But the results also suggest limits to the 'capabilities acquisition' view of strategic alliances. Consistent with the argument that alliance activity can promote increased specialization, we find that the capabilities of partner firms become more divergent in a substantial subset of alliances.", "title": "" } ]
[ { "docid": "007a42bdf781074a2d00d792d32df312", "text": "This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions.", "title": "" }, { "docid": "485484dcbb0113e9971ad4d37802cf59", "text": "Due to the rise of businesses utilizing the sharing economy concept, it is important to better understand the motivational factors that drive and hinder collaborative consumption in the travel and tourism marketplace. Based on responses from 754 adult travellers residing in the US, drivers and deterrents of the use of peer-to-peer accommodation rental services were identified. Factors that deter the use of peer-to-peer accommodation rental services include lack of trust, lack of efficacy with regards to technology, and lack of economic benefits. The motivations that drive the use of peer-to-peer accommodation include the societal aspects of sustainability and community, as well as economic benefits. Based on the empirical evidence, this study suggests several propositions for future studies and implications for tourism destinations and hospitality businesses on how to manage collaborative consumption.", "title": "" }, { "docid": "7a7b8d92cea993b3d2794f43eb8e448d", "text": "This article investigates the impact of user homophily on the social process of information diffusion in online social media. Over several decades, social scientists have been interested in the idea that similarity breeds connection—precisely known as “homophily”. “Homophily”, has been extensively studied in the social sciences and refers to the idea that users in a social system tend to bond more with ones who are “similar” to them than to ones who are dissimilar. The key observation is that homophily structures the ego-networks of individuals and impacts their communication behavior. It is therefore likely to effect the mechanisms in which information propagates among them. To this effect, we investigate the interplay between homophily along diverse user attributes and the information diffusion process on social media. Our approach has three steps. First we extract several diffusion characteristics along categories such as user-based (volume, number of seeds), topology-based (reach, spread) and time (rate)—corresponding to the baseline social graph as well as graphs filtered on different user attributes (e.g. location, activity behavior). Second, we propose a Dynamic Bayesian Network based framework to predict diffusion characteristics at a future time slice. Third, the impact of attribute homophily is quantified by the ability of the predicted characteristics in explaining actual diffusion, and external temporal variables, including trends in search and news. 
Experimental results on a large Twitter dataset are promising and demonstrate that the choice of the homophilous attribute can impact the prediction of information diffusion, given a specific metric and a topic. In most cases, attribute homophily is able to explain the actual diffusion and external trends by ∼ 15 − 25% over cases when homophily is not considered. Our method also outperforms baseline techniques in predicting diffusion characteristics subject to homophily, by ∼ 13 − 50%.", "title": "" }, { "docid": "be7d32aeffecc53c5d844a8f90cd5ce0", "text": "Wordnets play a central role in many natural language processing tasks. This paper introduces a multilingual editing system for the Open Multilingual Wordnet (OMW: Bond and Foster, 2013). Wordnet development, like most lexicographic tasks, is slow and expensive. Moving away from the original Princeton Wordnet (Fellbaum, 1998) development workflow, wordnet creation and expansion has increasingly been shifting towards an automated and/or interactive system facilitated task. In the particular case of human edition/expansion of wordnets, a few systems have been developed to aid the lexicographers’ work. Unfortunately, most of these tools have either restricted licenses, or have been designed with a particular language in mind. We present a web-based system that is capable of multilingual browsing and editing for any of the hundreds of languages made available by the OMW. All tools and guidelines are freely available under an open license.", "title": "" }, { "docid": "8ee0764d45e512bfc6b0273f7e90d2c1", "text": "This work introduces a new dataset and framework for the exploration of topological data analysis (TDA) techniques applied to time-series data. We examine the end-to-end TDA processing pipeline for persistent homology applied to time-delay embeddings of time series – embeddings that capture the underlying system dynamics from which time series data is acquired. In particular, we consider stability with respect to time series length, the approximation accuracy of sparse filtration methods, and the discriminating ability of persistence diagrams as a feature for learning. We explore these properties across a wide range of time-series datasets spanning multiple domains for single source multi-segment signals as well as multi-source single segment signals. Our analysis and dataset captures the entire TDA processing pipeline and includes time-delay embeddings, persistence diagrams, topological distance measures, as well as kernels for similarity learning and classification tasks for a broad set of time-series data sources.
We outline the TDA framework and rationale behind the dataset and provide insights into the role of TDA for time-series analysis as well as opportunities for new work.", "title": "" }, { "docid": "025953bb13772965bd757216f58d2bed", "text": "Designers use third-party intellectual property (IP) cores and outsource various steps in their integrated circuit (IC) design flow, including fabrication. As a result, security vulnerabilities have been emerging, forcing IC designers and end-users to reevaluate their trust in hardware. If an attacker gets hold of an unprotected design, attacks such as reverse engineering, insertion of malicious circuits, and IP piracy are possible. In this paper, we shed light on the vulnerabilities in very large scale integration (VLSI) design and fabrication flow, and survey design-for-trust (DfTr) techniques that aim at regaining trust in IC design. We elaborate on four DfTr techniques: logic encryption, split manufacturing, IC camouflaging, and Trojan activation. These techniques have been developed by reusing VLSI test principles.", "title": "" }, { "docid": "a90909570959ade87dd46186a0990a9e", "text": "DNA methylation is among the best studied epigenetic modifications and is essential to mammalian development. Although the methylation status of most CpG dinucleotides in the genome is stably propagated through mitosis, improvements to methods for measuring methylation have identified numerous regions in which it is dynamically regulated. In this Review, we discuss key concepts in the function of DNA methylation in mammals, stemming from more than two decades of research, including many recent studies that have elucidated when and where DNA methylation has a regulatory role in the genome. We include insights from early development, embryonic stem cells and adult lineages, particularly haematopoiesis, to highlight the general features of this modification as it participates in both global and localized epigenetic regulation.", "title": "" }, { "docid": "43ac7e674624615c9906b2bd58b72b7b", "text": "OBJECTIVE\nTo develop a method enabling human-like, flexible supervisory control via delegation to automation.\n\n\nBACKGROUND\nReal-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance.\n\n\nMETHOD\nWe review problems with static and adaptive (as opposed to \"adaptable\") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a \"level of automation\" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described.\n\n\nRESULTS\nOn the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. 
We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's \"playbook.\" Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles.\n\n\nCONCLUSION\nDelegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level.\n\n\nAPPLICATION\nMost applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.", "title": "" }, { "docid": "4f5e3933100a8dcec75ceb058faaa481", "text": "Reinforced Concrete Frames are the most commonly adopted buildings construction practices in India. With growing economy, urbanisation and unavailability of horizontal space increasing cost of land and need for agricultural land, high-rise sprawling structures have become highly preferable in Indian buildings scenario, especially in urban. With high-rise structures, not only the building has to take up gravity loads, but as well as lateral forces. Many important Indian cities fall under high risk seismic zones, hence strengthening of buildings for lateral forces is a prerequisite. In this study the aim is to analyze the response of a high-rise structure to ground motion using Response Spectrum Analysis. Different models, that is, bare frame, brace frame and shear wall frame are considered in Staad Pro. and change in the time period, stiffness, base shear, storey drifts and top-storey deflection of the building is observed and compared.", "title": "" }, { "docid": "b7dfec026a9fe18eb2cd8bdfd6cfa416", "text": "Based on the hypothesis that frame-semantic parsing and event extraction are structurally identical tasks, we retrain SEMAFOR, a stateof-the-art frame-semantic parsing system to predict event triggers and arguments. We describe how we change SEMAFOR to be better suited for the new task and show that it performs comparable to one of the best systems in event extraction. We also describe a bias in one of its models and propose a feature factorization which is better suited for this model.", "title": "" }, { "docid": "a252ec33139d9489133b91c2551a694f", "text": "The lucrative rewards of security penetrations into large organizations have motivated the development and use of many sophisticated rootkit techniques to maintain an attacker's presence on a compromised system. Due to the evasive nature of such infections, detecting these rootkit infestations is a problem facing modern organizations. While many approaches to this problem have been proposed, various drawbacks that range from signature generation issues, to coverage, to performance, prevent these approaches from being ideal solutions.\n In this paper, we present Blacksheep, a distributed system for detecting a rootkit infestation among groups of similar machines. This approach was motivated by the homogenous natures of many corporate networks. Taking advantage of the similarity amongst the machines that it analyses, Blacksheep is able to efficiently and effectively detect both existing and new infestations by comparing the memory dumps collected from each host.\n We evaluate Blacksheep on two sets of memory dumps. 
One set is taken from virtual machines using virtual machine introspection, mimicking the deployment of Blacksheep on a cloud computing provider's network. The other set is taken from Windows XP machines via a memory acquisition driver, demonstrating Blacksheep's usage under more challenging image acquisition conditions. The results of the evaluation show that by leveraging the homogeneous nature of groups of computers, it is possible to detect rootkit infestations.", "title": "" }, { "docid": "6f9be23e33910d44551b5befa219e557", "text": "The Lecture Notes are used for a short course on the theory and applications of the lattice Boltzmann methods for computational fluid dynamics taught by the author at Institut für Computeranwendungen im Bauingenieurwesen (CAB), Technischen Universität Braunschweig, during August 7-12, 2003. The lectures cover the basic theory of the lattice Boltzmann equation and its applications to hydrodynamics. Lecture One briefly reviews the history of the lattice gas automata and the lattice Boltzmann equation and their connections. Lecture Two provides an a priori derivation of the lattice Boltzmann equation, which connects the lattice Boltzmann equation to the continuous Boltzmann equation and demonstrates that the lattice Boltzmann equation is indeed a special finite difference form of the Boltzmann equation. Lecture Two also includes the derivation of the lattice Boltzmann model for nonideal gases from the Enskog equation for dense gases. Lecture Three studies the generalized lattice Boltzmann equation with multiple relaxation times. A summary is provided at the end of each Lecture. Lecture Four discusses the fluid-solid boundary conditions in the lattice Boltzmann methods. Applications of the lattice Boltzmann method to particulate suspensions, turbulence flows, and other flows are also shown. An Epilogue on the rationale of the lattice Boltzmann method is given. Some key references in the literature are also provided.", "title": "" }, { "docid": "068381a40679de50f0a8cdb4be50a2a2", "text": "The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with their integration, can result in a highly efficient algorithm, both in terms of obtained generalization error and training time.
Empirical evaluations demonstrate interesting results on some benchmarking datasets.", "title": "" }, { "docid": "830abfc28745f469cd24bb730111afcb", "text": "User interface (UI) is point of interaction between user and computer software. The success and failure of a software application depends on User Interface Design (UID). Possibility of using a software, easily using and learning are issues influenced by UID. The UI is significant in designing of educational software (e-Learning). Principles and concepts of learning should be considered in addition to UID principles in UID for e-learning. In this regard, to specify the logical relationship between education, learning, UID and multimedia at first we readdress the issues raised in previous studies. It is followed by examining the principle concepts of e-learning and UID. Then, we will see how UID contributes to e-learning through the educational software built by authors. Also we show the way of using UI to improve learning and motivating the learners and to improve the time efficiency of using e-learning software. Keywords—e-Learning, User Interface Design, Self learning, Educational Multimedia", "title": "" }, { "docid": "7c5f1b12f540c8320587ead7ed863ee5", "text": "This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The randomly occurring controller gain fluctuation phenomenon is investigated for non-fragile strategy. Moreover, the mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.", "title": "" }, { "docid": "8d08a464c75a8da6de159c0f0e46d447", "text": "A License plate recognition (LPR) system can be divided into the following steps: preprocessing, plate region extraction, plate region thresholding, character segmentation, character recognition and post-processing. For step 2, a combination of color and shape information of plate is used and a satisfactory extraction result is achieved. For step 3, first channel is selected, then threshold is computed and finally the region is thresholded. For step 4, the character is segmented along vertical, horizontal direction and some tentative optimizations are applied. For step 5, minimum Euclidean distance based template matching is used. And for those confusing characters such as '8' & 'B' and '0' & 'D', a special processing is necessary. And for the final step, validity is checked by machine and manual. The experiment performed by program based on aforementioned algorithms indicates that our LPR system based on color image processing is quite quick and accurate.", "title": "" }, { "docid": "a6499aad878777373006742778145ddb", "text": "The very term 'Biotechnology' elicits a range of emotions, from wonder and awe to downright fear and hostility. This is especially true among non-scientists, particularly in respect of agricultural and food biotechnology. These emotions indicate just how poorly understood agricultural biotechnology is and the need for accurate, dispassionate information in the public sphere to allow a rational public debate on the actual, as opposed to the perceived, risks and benefits of agricultural biotechnology. 
This review considers first the current state of public knowledge on agricultural biotechnology, and then explores some of the popular misperceptions and logical inconsistencies in both Europe and North America. I then consider the problem of widespread scientific illiteracy, and the role of the popular media in instilling and perpetuating misperceptions. The impact of inappropriate efforts to provide 'balance' in a news story, and of belief systems and faith also impinges on public scientific illiteracy. Getting away from the abstract, we explore a more concrete example of the contrasting approach to agricultural biotechnology adoption between Europe and North America, in considering divergent approaches to enabling coexistence in farming practices. I then question who benefits from agricultural biotechnology. Is it only the big companies, or is it society at large--and the environment--also deriving some benefit? Finally, a crucial aspect in such a technologically complex issue, ordinary and intelligent non-scientifically trained consumers cannot be expected to learn the intricacies of the technology to enable a personal choice to support or reject biotechnology products. The only reasonable and pragmatic alternative is to place trust in someone to provide honest advice. But who, working in the public interest, is best suited to provide informed and accessible, but objective, advice to wary consumers?", "title": "" }, { "docid": "6522a164502dbefa1e915dacc53e8a94", "text": "Whilst the future for social media in chronic disease management appears to be optimistic, there is limited concrete evidence indicating whether and how social media use significantly improves patient outcomes. This review examines the health outcomes and related effects of using social media, while also exploring the unique affordances underpinning these effects. Few studies have investigated social media's potential in chronic disease, but those we found indicate impact on health status and other effects are positive, with none indicating adverse events. Benefits have been reported for psychosocial management via the ability to foster support and share information; however, there is less evidence of benefits for physical condition management. We found that studies covered a very limited range of social media platforms and that there is an ongoing propensity towards reporting investigations of earlier social platforms, such as online support groups (OSG), discussion forums and message boards. Finally, it is hypothesized that for social media to form a more meaningful part of effective chronic disease management, interventions need to be tailored to the individualized needs of sufferers. The particular affordances of social media that appear salient in this regard from analysis of the literature include: identity, flexibility, structure, narration and adaptation. This review suggests further research of high methodological quality is required to investigate the affordances of social media and how these can best serve chronic disease sufferers. Evidence-based practice (EBP) using social media may then be considered.", "title": "" }, { "docid": "706bf586392b754863060542cbd77fa3", "text": "SAX (Symbolic Aggregate approXimation) is one of the main symbolization technique for time series. A well-known limitation of SAX is that trends are not taken into account in the symbolization. 
This paper proposes 1d-SAX a method to represent a time series as a sequence of symbols that contain each an information about the average and the trend of the series on a segment. We compare the efficiency of SAX and 1d-SAX in terms of i) goodness-of-fit and ii) retrieval performance for querying a time series database with an asymmetric scheme. The results show that 1d-SAX improves retrieval performance using equal quantity of information, especially when the compression rate increases.", "title": "" }, { "docid": "29199ac45d4aa8035fd03e675406c2cb", "text": "This work presents an autonomous mobile robot in order to cover an unknown terrain “randomly”, namely entirely, unpredictably and evenly. This aim is very important, especially in military missions, such as the surveillance of terrains, the terrain exploration for explosives and the patrolling for intrusion in military facilities. The “heart” of the proposed robot is a chaotic motion controller, which is based on a chaotic true random bit generator. This generator has been implemented with a microcontroller, which converts the produced chaotic bit sequence, to the robot's motion. Experimental results confirm that this approach, with an appropriate sensor for obstacle avoidance, can obtain very satisfactory results in regard to the fast scanning of the robot’s workspace with unpredictable way. Key-Words: Autonomous mobile robot, terrain coverage, microcontroller, random bit generator, nonlinear system, chaos, Logistic map.", "title": "" } ]
scidocsrr
20a77d955a7015fd6a195968a0e8bfa9
The effect of egocentric body movements on users' navigation performance and spatial memory in zoomable user interfaces
[ { "docid": "2b9733f936f39d0bb06b8f89a95f31e4", "text": "In order to improve the three-dimensional (3D) exploration of virtual spaces above a tabletop, we developed a set of navigation techniques using a handheld magic lens. These techniques allow for an intuitive interaction with two-dimensional and 3D information spaces, for which we contribute a classification into volumetric, layered, zoomable, and temporal spaces. The proposed PaperLens system uses a tracked sheet of paper to navigate these spaces with regard to the Z-dimension (height above the tabletop). A formative user study provided valuable feedback for the improvement of the PaperLens system with respect to layer interaction and navigation. In particular, the problem of keeping the focus on selected layers was addressed. We also propose additional vertical displays in order to provide further contextual clues.", "title": "" } ]
[ { "docid": "fe407f4983ef6cc2e257d63a173c8487", "text": "We present a semantically rich graph representation for indoor robotic navigation. Our graph representation encodes: semantic locations such as offices or corridors as nodes, and navigational behaviors such as enter office or cross a corridor as edges. In particular, our navigational behaviors operate directly from visual inputs to produce motor controls and are implemented with deep learning architectures. This enables the robot to avoid explicit computation of its precise location or the geometry of the environment, and enables navigation at a higher level of semantic abstraction. We evaluate the effectiveness of our representation by simulating navigation tasks in a large number of virtual environments. Our results show that using a simple sets of perceptual and navigational behaviors, the proposed approach can successfully guide the way of the robot as it completes navigational missions such as going to a specific office. Furthermore, our implementation shows to be effective to control the selection and switching of behaviors.", "title": "" }, { "docid": "b78f1e6a5e93c1ad394b1cade293829f", "text": "This paper presents a novel approach for creation of topographical function and object markers used within watershed segmentation. Typically, marker-driven watershed segmentation extracts seeds indicating the presence of objects or background at specific image locations. The marker locations are then set to be regional minima within the topological surface (typically, the gradient of the original input image), and the watershed algorithm is applied. In contrast, our approach uses two classifiers, one trained to produce markers, the other trained to produce object boundaries. As a result of using machine-learned pixel classification, the proposed algorithm is directly applicable to both single channel and multichannel image data. Additionally, rather than flooding the gradient image, we use the inverted probability map produced by the second aforementioned classifier as input to the watershed algorithm. Experimental results demonstrate the superior performance of the classification-driven watershed segmentation algorithm for the tasks of 1) image-based granulometry and 2) remote sensing", "title": "" }, { "docid": "218ca177bf3a5b78482b2064608505fc", "text": "Wideband dual-polarization performance is desired for low-noise receivers and radiom eters at cent imete r and m illimeter wavelengths. The use of a waveguide orthomode transducer (OMT) can increase spectral coverage and sensitivity while reducing exit aperture size, optical spill, and instrumental polarization offsets. For these reasons, an orthomode junction is favored over a traditional quasi-op tical wire grid for focal plane imaging arrays from a systems perspective. The fabrication and pe rformance o f wideban d symm etric Bøifot OM T junctions at K -, Ka-, Q-, and W-bands are described. Typical WR10.0 units have an insertion loss of <0.2 dB , return loss ~20dB, and >40dB isolation over a >75-to-110 GHz band. The OMT operates with reduced ohmic losses at cryogenic temperatures.", "title": "" }, { "docid": "51030b1a05af38096a6ba72660f8bdf2", "text": "As a new type of e-commerce, social commerce is an emerging marketing form in which business is conducted via social networking platforms. It is playing an increasingly important role in influencing consumers’ purchase intentions. 
Social commerce uses friendships on social networking platforms, such as Facebook and Twitter, as the vehicle for social sharing about products or sellers to induce interest in a product, thereby increasing the purchase intention. In this paper, we develop and validate a conceptual model of how social factors, such as social support, seller uncertainty, and product uncertainty, influence users' purchasing behaviors in social commerce. This study aims to provide an understanding of the relationship between user behavior and social factors on social networking platforms. Using the largest social networking website in China, renren.com, this study finds that social support, seller uncertainty, and product uncertainty affect user behaviors. The results further show that social factors can significantly enhance users' purchase intentions in social shopping. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9f635d570b827d68e057afcaadca791c", "text": "Research has verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aids in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.", "title": "" }, { "docid": "0f853c6ccf6ce4cf025050135662f725", "text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.", "title": "" }, { "docid": "b1a08b10ea79a250a62030a2987b67a6", "text": "Most text mining tasks, including clustering and topic detection, are based on statistical methods that treat text as bags of words. Semantics in the text is largely ignored in the mining process, and mining results often have low interpretability. 
One particular challenge faced by such approaches lies in short text understanding, as short texts lack enough content from which statistical conclusions can be drawn easily. In this paper, we improve text understanding by using a probabilistic knowledgebase that is as rich as our mental world in terms of the concepts (of worldly facts) it contains. We then develop a Bayesian inference mechanism to conceptualize words and short text. We conducted comprehensive experiments on conceptualizing textual terms, and clustering short pieces of text such as Twitter messages. Compared to purely statistical methods such as latent semantic topic modeling or methods that use existing knowledgebases (e.g., WordNet, Freebase and Wikipedia), our approach brings significant improvements in short text understanding as reflected by the clustering accuracy.", "title": "" }, { "docid": "472f1b7f3ebf1d8af950d9d348cafc98", "text": "We analyze convergence of GANs through the lens of online learning and game theory, to understand what makes it hard to achieve consistent stable training in practice. We identify that the underlying game here can be ill-posed and poorly conditioned, and propose a simple regularization scheme based on local perturbations of the input data to address these issues. Currently, the methods that improve stability either impose additional computational costs or require the usage of specific architectures/modeling objectives. Further, we show that WGAN-GP, which is the state-of-the-art stable training procedure, is similar to LS-GAN, does not follow from KR-duality and can be too restrictive in general. In contrast, our proposed algorithm is fast, simple to implement and achieves competitive performance in a stable fashion across a variety of architectures and objective functions with minimal hyperparameter tuning. We show significant improvements over WGAN-GP across these conditions.", "title": "" }, { "docid": "c632d3bfb27987e74cc69865627388bf", "text": "Previous studies and surgeon interviews have shown that most surgeons prefer quality standard definition (SD)TV 2D scopes to first generation 3D endoscopes. The use of a telesurgical system has eased many of the design constraints on traditional endoscopes, enabling the design of a high quality SDTV 3D endoscope and an HDTV endoscopic system with outstanding resolution. The purpose of this study was to examine surgeon performance and preference given the choice between these. The study involved two perceptual tasks and four visual-motor tasks using a telesurgical system with the 2D HDTV endoscope and the SDTV endoscope in both 2D and 3D mode. The use of a telesurgical system enabled recording of all the subjects' motions for later analysis. Contrary to experience with early 3D scopes and SDTV 2D scopes, this study showed that despite the superior resolution of the HDTV system surgeons performed better with and preferred the SDTV 3D scope.", "title": "" }, { "docid": "4054713a00a9a2af6eb65f56433a943e", "text": "The question why deep learning algorithms perform so well in practice has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, robustness or sparseness, have not provided complete explanations, due to the high complexity of the deep learning algorithms and their inherent randomness. In this work, we introduce a new approach – ensemble robustness – towards characterizing the generalization performance of generic deep learning algorithms. 
Ensemble robustness concerns robustness of the population of the hypotheses that may be output by a learning algorithm. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbation is bounded in average, or equivalently, the performance variance of the algorithm is small. Quantifying ensemble robustness of various deep learning algorithms may be difficult analytically. However, extensive simulations for seven common deep learning algorithms for different network architectures provide supporting evidence for our claims. Furthermore, our work explains the good performance of several published deep learning algorithms.", "title": "" }, { "docid": "d06dc916942498014f9d00498c1d1d1f", "text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms: state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional.", "title": "" }, { "docid": "988c161ceae388f5dbcdcc575a9fa465", "text": "This work presents an architecture for single source, single point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. Results show that the hybrid system provides 5 to 30 dB active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low frequency (> 100 Hz) performance by up to 15 dB over either control component acting individually.", "title": "" }, { "docid": "2efb71ffb35bd05c7a124ffe8ad8e684", "text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. 
We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.", "title": "" }, { "docid": "a89c53f4fbe47e7a5e49193f0786cd6d", "text": "Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children.", "title": "" }, { "docid": "87d15c47894210ad306948f32122a2c4", "text": "We design and implement MobileInsight, a software tool that collects, analyzes and exploits runtime network information from operational cellular networks. MobileInsight runs on commercial off-the-shelf phones without extra hardware or additional support from operators. It exposes protocol messages on both control plane and (below IP) data plane from the 3G/4G chipset. It provides in-device protocol analysis and operation logic inference. It further offers a simple API, through which developers and researchers obtain access to low-level network information for their mobile applications. We have built three showcases to illustrate how MobileInsight is applied to cellular network research.", "title": "" }, { "docid": "3716c221969ca93dac889820498d8dd4", "text": "Affective Loop Experiences What Are They? p. 1 Fine Processing p. 13 Mass Interpersonal Persuasion: An Early View of a New Phenomenon p. 23 Social Network Systems Online Persuasion in Facebook and Mixi: A Cross-Cultural Comparison p. 35 Website Credibility, Active Trust and Behavioural Intent p. 47 Network Awareness, Social Context and Persuasion p. 58 Knowledge Management Persuasion in Knowledge-Based Recommendation p. 71 Persuasive Technology Design A Rhetorical Approach p. 83 Benevolence and Effectiveness: Persuasive Technology's Spillover Effects in Retail Settings p. 94", "title": "" }, { "docid": "13ae30bc5bcb0714fe752fbe9c7e5de8", "text": "The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. 
In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.", "title": "" }, { "docid": "f489708f15f3e5cdd15f669fb9979488", "text": "Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. Inspired by this, we introduce strategic object oriented reinforcement learning (SOORL) to learn simple dynamics model through automatic model selection and perform efficient planning with strategic exploration. We compare different exploration strategies in a model-based setting in which exact planning is impossible. Additionally, we test our approach on perhaps the hardest Atari game Pitfall! and achieve significantly improved exploration and performance over prior methods.", "title": "" }, { "docid": "54d223a2a00cbda71ddf3f1b29f1ebed", "text": "Much of the data of scientific interest, particularly when independence of data is not assumed, can be represented in the form of information networks where data nodes are joined together to form edges corresponding to some kind of associations or relationships. Such information networks abound, like protein interactions in biology, web page hyperlink connections in information retrieval on the Web, cellphone call graphs in telecommunication, co-authorships in bibliometrics, crime event connections in criminology, etc. All these networks, also known as social networks, share a common property, the formation of connected groups of information nodes, called community structures. These groups are densely connected nodes with sparse connections outside the group. Finding these communities is an important task for the discovery of underlying structures in social networks, and has recently attracted much attention in data mining research. In this paper, we present Top Leaders, a new community mining approach that, simply put, regards a community as a set of followers congregating around a potential leader. Our algorithm starts by identifying promising leaders in a given network then iteratively assembles followers to their closest leaders to form communities, and subsequently finds new leaders in each group around which to gather followers again until convergence. Our intuitions are based on proven observations in social networks and the results are very promising. Experimental results on benchmark networks verify the feasibility and effectiveness of our new community mining approach.", "title": "" }, { "docid": "e8d102a7b00f81cefc4b1db043a041f8", "text": "Microelectrode measurements can be used to investigate both the intracellular pools of ions and membrane transport processes of single living cells. Microelectrodes can report these processes in the surface layers of root and leaf cells of intact plants. By careful manipulation of the plant, a minimum of disruption is produced and therefore the information obtained from these measurements most probably represents the 'in vivo' situation. 
Microelectrodes can be used to assay for the activity of particular transport systems in the plasma membrane of cells. Compartmental concentrations of inorganic metabolite ions have been measured by several different methods and the results obtained for the cytosol are compared. Ion-selective microelectrodes have been used to measure the activities of ions in the apoplast, cytosol and vacuole of single cells. New sensors for these microelectrodes are being produced which offer lower detection limits and the opportunity to measure other previously unmeasured ions. Measurements can be used to determine the intracellular steady-state activities or report the response of cells to environmental changes.", "title": "" } ]
scidocsrr
9b26ccdaafcfd71b7bad0623378094f7
Pendulum-balanced autonomous unicycle: Conceptual design and dynamics model
[ { "docid": "730d5e6577936ef3b513d0a7f4fa3641", "text": "In this research a computer simulation for implementing attitude controller of wheeled inverted pendulum is carried out. The wheeled inverted pendulum is a kind of an inverted pendulum that has two equivalent points. In order to keep the naturally unstable equivalent point, it should be controlling the wheels persistently. Dynamic equations of the wheeled inverted pendulum are derived with considering tilted road as one of various road conditions. A linear quadratic regulator is adopted for the attitude controller since it is easy to obtain full state variables from the sensors for that control scheme and based on controllable condition of the destination as well. Various computer simulation shows that the LQR controller is doing well not only flat road but also tilted road.", "title": "" }, { "docid": "54120754dc82632e6642cbd08401d2dc", "text": "In this paper we study the dynamic modeling of a unicycle robot composed of a wheel, a frame and a disk. The unicycle can reach longitudinal stability by appropriate control to the wheel and lateral stability by adjusting appropriate torque imposed by the disk. The dynamic modeling of the unicycle robot is derived by Euler-Lagrange method. The stability and controllability of the system are analyzed according to the mathematic model. Independent simulation using MATLAB and ODE methods are then proposed respectively. Through the simulation, we confirm the validity of the two obtained models of the unicycle robot system, and provide two experimental platforms for the designing of the balance controller.", "title": "" } ]
[ { "docid": "448d70d9f5f8e5fcb8d04d355a02c8f9", "text": "Structural health monitoring (SHM) using wireless sensor networks (WSNs) has gained research interest due to its ability to reduce the costs associated with the installation and maintenance of SHM systems. SHM systems have been used to monitor critical infrastructure such as bridges, high-rise buildings, and stadiums and has the potential to improve structure lifespan and improve public safety. The high data collection rate of WSNs for SHM pose unique network design challenges. This paper presents a comprehensive survey of SHM using WSNs outlining the algorithms used in damage detection and localization, outlining network design challenges, and future research directions. Solutions to network design problems such as scalability, time synchronization, sensor placement, and data processing are compared and discussed. This survey also provides an overview of testbeds and real-world deployments of WSNs for SH.", "title": "" }, { "docid": "52c7469ba9164280a9de841537e530d7", "text": "Monitoring the “physics” of control systems to detect attacks is a growing area of research. In its basic form a security monitor creates time-series models of sensor readings for an industrial control system and identifies anomalies in these measurements in order to identify potentially false control commands or false sensor readings. In this paper, we review previous work based on a unified taxonomy that allows us to identify limitations, unexplored challenges, and new solutions. In particular, we propose a new adversary model and a way to compare previous work with a new evaluation metric based on the trade-off between false alarms and the negative impact of undetected attacks. We also show the advantages and disadvantages of three experimental scenarios to test the performance of attacks and defenses: real-world network data captured from a large-scale operational facility, a fully-functional testbed that can be used operationally for water treatment, and a simulation of frequency control in the power grid.", "title": "" }, { "docid": "28c5fada2aab828af16ee5d7bffb4093", "text": "Based on the notion of accumulators, we propose a new cryptog raphic scheme called universal accumulators. This scheme enables one to commit to a set of values using a short accumulator and to efficiently com pute a membership witness of any value that has been accumulated. Unlike tradi tional accumulators, this scheme also enables one to efficiently compute a nonmemb ership witness of any value that has not been accumulated. We give a construc tion for universal accumulators and prove its security based on the strong RSA a ssumption. We further present a construction for dynamic universal accumula tors; this construction allows one to dynamically add and delete inputs with constan t computational cost. Our construction directly builds upon Camenisch and L ysyanskaya’s dynamic accumulator scheme. Universal accumulators can be se en as an extension to dynamic accumulators with support of nonmembership witn ess. We also give an efficient zero-knowledge proof protocol for proving that a committed value is not in the accumulator. Our dynamic universal accumulator c onstruction enables efficient membership revocation in an anonymous fashion.", "title": "" }, { "docid": "148d0709c58111c2f703f68d348c09af", "text": "There has been tremendous growth in the use of mobile devices over the last few years. 
This growth has fueled the development of millions of software applications for these mobile devices often called 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.", "title": "" }, { "docid": "f0cabaa5dedadd65313af78c42a2df35", "text": "In this paper, a quadrifilar spiral antenna (QSA) with an integrated module for UHF radio frequency identification (RFID) reader is presented. The proposed QSA consists of four spiral antennas with short stubs and a microstrip feed network. Also, the shielded module is integrated on the center of the ground inside the proposed QSA. In order to match the proposed QSA with the integrated module, we adopt a short stub connected from each spiral antenna to ground. Experimental results show that the QSA of size 80 × 80 × 11.2 mm³ with the integrated module (40 × 40 × 3 mm³) has a peak gain of 3.5 dBic, an axial ratio under 2.5 dB and a 3-dB beamwidth of about 130°.", "title": "" }, { "docid": "0ccfe04a4426e07dcbd0260d9af3a578", "text": "We present an efficient algorithm to perform approximate offsetting operations on geometric models using GPUs. Our approach approximates the boundary of an object with point samples and computes the offset by merging the balls centered at these points. The underlying approach uses Layered Depth Images (LDI) to organize the samples into structured points and performs parallel computations using multiple cores. We use spatial hashing to accelerate intersection queries and balance the workload among various cores. Furthermore, the problem of offsetting with a large distance is decomposed into successive offsetting using smaller distances. We derive bounds on the accuracy of offset computation as a function of the sampling rate of LDI and offset distance. In practice, our GPU-based algorithm can accurately compute offsets of models represented using hundreds of thousands of points in a few seconds on GeForce GTX 580 GPU. We observe more than 100 times speedup over prior serial CPU-based approximate offset computation algorithms.", "title": "" }, { "docid": "e72f8ad61a7927fee8b0a32152b0aa4b", "text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. 
In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratio-based approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.", "title": "" }, { "docid": "d3682d2a9e11f80a51c53659c9b6623d", "text": "Despite the considerable clinical impact of congenital human cytomegalovirus (HCMV) infection, the mechanisms of maternal–fetal transmission and the resultant placental and fetal damage are largely unknown. Here, we discuss animal models for the evaluation of CMV vaccines and virus-induced pathology and particularly explore surrogate human models for HCMV transmission and pathogenesis in the maternal–fetal interface. Studies in floating and anchoring placental villi and more recently, ex vivo modeling of HCMV infection in integral human decidual tissues, provide unique insights into patterns of viral tropism, spread, and injury, defining the outcome of congenital infection, and the effect of potential antiviral interventions.", "title": "" }, { "docid": "5fba6770fef320c6e7dee2c848a0a503", "text": "Person re-identification (Re-ID) aims at recognizing the same person from images taken across different cameras. To address this task, one typically requires a large amount of labeled data for training an effective Re-ID model, which might not be practical for real-world applications. To alleviate this limitation, we choose to exploit a sufficient amount of pre-existing labeled data from a different (auxiliary) dataset. By jointly considering such an auxiliary dataset and the dataset of interest (but without label information), our proposed adaptation and re-identification network (ARN) performs unsupervised domain adaptation, which leverages information across datasets and derives domain-invariant features for Re-ID purposes. In our experiments, we verify that our network performs favorably against state-of-the-art unsupervised Re-ID approaches, and even outperforms a number of baseline Re-ID methods which require fully supervised data for training.", "title": "" }, { "docid": "9dceccb7b171927a5cba5a16fd9d76c6", "text": "This paper involved developing two (Type I and Type II) equal-split Wilkinson power dividers (WPDs). The Type I divider can use two short uniform-impedance transmission lines, one resistor, one capacitor, and two quarter-wavelength (λ/4) transformers in its circuit. Compared with the conventional equal-split WPD, the proposed Type I divider can relax the two λ/4 transformers and the output ports layout restrictions of the conventional WPD. To eliminate the number of impedance transformers, the proposed Type II divider requires only one impedance transformer attaining the optimal matching design and a compact size. A compact four-way equal-split WPD based on the proposed Type I and Type II dividers was also developed, facilitating a simple layout, and reducing the circuit size. 
Regarding the divider, to obtain favorable selectivity and isolation performance levels, two Butterworth filter transformers were integrated in the proposed Type I divider to perform filter response and power split functions. Finally, a single Butterworth filter transformer was integrated in the proposed Type II divider to demonstrate a compact filtering WPD.", "title": "" }, { "docid": "39e30b2303342235780c7fff68cdc0aa", "text": "The impact factor is only one of three standardized measures created by the Institute of Scientific Information (ISI), which can be used to measure the way a journal receives citations to its articles over time. The build-up of citations tends to follow a curve like that of Figure 1. Citations to articles published in a given year rise sharply to a peak between two and six years after publication. From this peak citations decline exponentially. The citation curve of any journal can be described by the relative size of the curve (in terms of area under the line), the extent to which the peak of the curve is close to the origin and the rate of decline of the curve. These characteristics form the basis of the ISI indicators: impact factor, immediacy index and cited half-life. The impact factor is a measure of the relative size of the citation curve in years 2 and 3. It is calculated by dividing the number of current citations a journal receives to articles published in the two previous years by the number of articles published in those same years. So, for example, the 1999 impact factor is the citations in 1999 to articles published in 1997 and 1998 divided by the number of articles published in 1997 and 1998. The number that results can be thought of as the average number of citations the average article receives per annum in the two years after the publication year. The immediacy index gives a measure of the skewness of the curve, that is, the extent to which the peak of the curve lies near the origin of the graph. It is calculated by dividing the citations a journal receives in the current year by the number of articles it publishes in that year, i.e., the 1999 immediacy index is the average number of citations in 1999 to articles published in 1999. The number that results can be thought of as the initial gradient of the citation curve, a measure of how quickly items in that journal get cited upon publication. The cited half-life is a measure of the rate of decline of the citation curve. It is the number of years that the number of current citations takes to decline to 50% of its initial value; the cited half-life is 6 years in the example given in Figure 1. It is a measure of how long articles in a journal continue to be cited after publication.", "title": "" }, { "docid": "200ee6830f8b8f54ecb1c808c6712337", "text": "DC power distribution systems for building application are gaining interest both in the academic and industrial worlds, due to potential benefits in terms of energy efficiency and capital savings. These benefits are more evident where the end-use loads are natively DC (e.g., computers, solid-state lighting or variable speed drives for electric motors), like in data centers and commercial buildings, but also in houses. When considering the presence of onsite renewable generation, e.g. PV or micro-wind generators, storage systems and electric vehicles, DC-based building microgrids can bring additional benefits, allowing direct coupling of DC loads and DC Distributed Energy Resources (DERs). 
A number of demonstration installations have been built and operated around the world, and an effort is being made both in the USA and Europe to study different aspects involved in the implementation of a DC distribution system (e.g. safety, protection, control) and to develop standards for DC building application. This paper discusses the planning of an experimental DC microgrid with power hardware in the loop features at the University of Naples Federico II, Dept. of Electr. Engineering and Inf. Technologies. The microgrid consists of a 3-wire DC bus, with positive, negative and neutral poles, with a voltage range of +/-0÷400 V. The system integrates a number of DERs, like PV, Wind and Fuel Cell generators, battery and super capacitor based storage systems, EV chargers, standard loads and smart loads. It will also include a power-hardware-in-the-loop platform with the aim to enable the real time emulation of single components or parts of the microgrid, or of systems and sub-systems interacting with the microgrid, thus realizing a virtual extension of the scale of the system. Technical features and specifications of the power amplifier to be used as power interface of the PHIL platform will be discussed in detail.", "title": "" }, { "docid": "92137a6f5fa3c5059bdb08db2fb5c39d", "text": "Motivated by our ongoing efforts in the development of Refraction 2, a puzzle game targeting mathematics education, we realized that the quality of a puzzle is critically sensitive to the presence of alternative solutions with undesirable properties. Where, in our game, we seek a way to automatically synthesize puzzles that can only be solved if the player demonstrates specific concepts, concern for the possibility of undesirable play touches other interactive design domains. To frame this problem (and our solution to it) in a general context, we formalize the problem of generating solvable puzzles that admit no undesirable solutions as an NP-complete search problem. By making two design-oriented extensions to answer set programming (a technology that has been recently applied to constrained game content generation problems) we offer a general way to declaratively pose and automatically solve the high-complexity problems coming from this formulation. Applying this technique to Refraction, we demonstrate a qualitative leap in the kind of puzzles we can reliably generate. This work opens up new possibilities for quality-focused content generators that guarantee properties over their entire combinatorial space of play.", "title": "" }, { "docid": "584d2858178e4e33855103a71d7fdce4", "text": "This paper presents a 5G mm-wave phased-array antenna for 3D-hybrid beamforming. It uses an MFC to steer the beam in elevation and a Butler matrix network for the azimuth. For the Butler matrix network, a structure using a 180° ring hybrid coupler switch network is proposed to obtain additional beam patterns and improved SRR in comparison with the conventional structure. In addition, 15 azimuth beam patterns can be selected. When the chip of the proposed structure is used, over 1000 beamforming variations are possible. This makes the design suitable for 5G or satellite communication systems that require beamforming.", "title": "" }, { "docid": "292d7fbc9352dc1d2a84364d66dda308", "text": "The ultrastructure of somatic cells present in gonadal tubules in male oyster Crassostrea gigas was investigated. 
These cells, named Intragonadal Somatic Cells (ISCs), have a great role in the organization of the germinal epithelium in the gonad. Immunological detection of α-tubulin tyrosine illustrates their association in columns from the base to the lumen of the tubule, stabilized by numerous adhesive junctions. This somatic intragonadal organization delimited different groups of germ cells along the tubule walls. In early stages of gonad development, numerous phagolysosomes were observed in the cytoplasm of ISCs, indicating that these cells have in this species an essential role in the removal of waste sperm in the tubules. Variations of lipid droplet content in the cytoplasm of ISCs were also noticed along the spermatogenesis course. ISCs also present some mitochondria with tubulo-lamellar cristae.", "title": "" }, { "docid": "5c31ed81a9c8d6463ce93890e38ad7b5", "text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of well-prepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson's training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors' knowledge, this work is the first attempt to use a large-scale dataset of automatically generated question-answer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.", "title": "" }, { "docid": "428069c804c035e028e9047d6c1f70f7", "text": "We present a co-designed scheduling framework and platform architecture that together support compositional scheduling of real-time systems. The architecture is built on the Xen virtualization platform, and relies on compositional scheduling theory that uses periodic resource models as component interfaces. We implement resource models as periodic servers and consider enhancements to periodic server design that significantly improve response times of tasks and resource utilization in the system while preserving theoretical schedulability results. We present an extensive evaluation of our implementation using workloads from an avionics case study as well as synthetic ones.", "title": "" }, { "docid": "ec9f793761ebd5199c6a2cc8c8215ac4", "text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. 
Some prototypes have been fabricated and measured, displaying a very good performance.", "title": "" }, { "docid": "d62ab0d9f243aebea62d782ec4163c69", "text": "Recommender Systems (RS) serve online customers in identifying those items from a variety of choices that best match their needs and preferences. In this context explanations summarize the reasons why a specific item is proposed and strongly increase the users' trust in the system's results. In this paper we propose a framework for generating knowledgeable explanations that exploits domain knowledge to transparently argue why a recommended item matches the user's preferences. Furthermore, results of an online experiment on a real-world platform show that users' perception of the usability of a recommender system is positively influenced by knowledgeable explanations and that consequently users' experience in interacting with the system, their intention to use it repeatedly as well as their commitment to recommend it to others are increased.", "title": "" }, { "docid": "cd9632f63fc5e3acf0ebb1039048f671", "text": "The authors completed an 8-week practice placement at Thrive's garden project in Battersea Park, London, as part of their occupational therapy degree programme. Thrive is a UK charity using social and therapeutic horticulture (STH) to enable disabled people to make positive changes to their own lives (Thrive 2008). STH is an emerging therapeutic movement, using horticulture-related activities to promote the health and wellbeing of disabled and vulnerable people (Sempik et al 2005, Fieldhouse and Sempik 2007). Within Battersea Park, Thrive has a main garden with available indoor facilities and two satellite gardens. All these gardens are publicly accessible. Thrive Battersea's service users include people with learning disabilities, mental health challenges and physical disabilities. Thrive's group facilitators (referred to as therapists) lead regular gardening groups, aiming to enable individual performance within the group and being mindful of health conditions and circumstances. The groups have three types of participant: Thrive's therapists, service users (known as gardeners) and volunteers. The volunteers help Thrive's therapists and gardeners to perform STH activities. The gardening groups comprise participants from various age groups and abilities. Thrive Battersea provides ongoing contact between the gardeners, volunteers and therapists. Integrating service users and non-service users is a method of tackling negative attitudes to disability and also promoting social inclusion (Sayce 2000). Thrive Battersea is an example of a 'role-emerging' practice placement, which is based outside either local authorities or the National Health Service (NHS) and does not have an on-site occupational therapist (College of Occupational Therapists 2006). The connection of occupational therapy theory to practice is essential on any placement (Alsop 2006). The role-emerging nature of this placement placed additional reflective onus on the authors to identify the links between theory and practice. The authors observed how Thrive's gardeners connected to the spaces they worked and to the people they worked with. A sense of individual", "title": "" } ]
scidocsrr
2e55b9e280c82ad6d994acd2bbf7b280
Wheat grass juice reduces transfusion requirement in patients with thalassemia major: a pilot study.
[ { "docid": "242746fd37b45c83d8f4d8a03c1079d3", "text": "BACKGROUND\nThe use of wheat grass (Triticum aestivum) juice for treatment of various gastrointestinal and other conditions had been suggested by its proponents for more than 30 years, but was never clinically assessed in a controlled trial. A preliminary unpublished pilot study suggested efficacy of wheat grass juice in the treatment of ulcerative colitis (UC).\n\n\nMETHODS\nA randomized, double-blind, placebo-controlled study. One gastroenterology unit in a tertiary hospital and three study coordinating centers in three major cities in Israel. Twenty-three patients diagnosed clinically and sigmoidoscopically with active distal UC were randomly allocated to receive either 100 cc of wheat grass juice, or a matching placebo, daily for 1 month. Efficacy of treatment was assessed by a 4-fold disease activity index that included rectal bleeding and number of bowel movements as determined from patient diary records, a sigmoidoscopic evaluation, and global assessment by a physician.\n\n\nRESULTS\nTwenty-one patients completed the study, and full information was available on 19 of them. Treatment with wheat grass juice was associated with significant reductions in the overall disease activity index (P=0.031) and in the severity of rectal bleeding (P = 0.025). No serious side effects were found. Fresh extract of wheat grass demonstrated a prominent tracing in cyclic voltammetry methodology, presumably corresponding to four groups of compounds that exhibit anti-oxidative properties.\n\n\nCONCLUSION\nWheat grass juice appeared effective and safe as a single or adjuvant treatment of active distal UC.", "title": "" } ]
[ { "docid": "e870d5f8daac0d13bdcffcaec4ba04c1", "text": "In this paper the design, fabrication and test of X-band and 2-18 GHz wideband high power SPDT MMIC switches in microstrip GaN technology are presented. Such switches have demonstrated state-of-the-art performances. In particular the X-band switch exhibits 1 dB insertion loss, better than 37 dB isolation and a power handling capability at 9 GHz of better than 39 dBm at 1 dB insertion loss compression point; the wideband switch has an insertion loss lower than 2.2 dB, better than 25 dB isolation and a power handling capability of better than 38 dBm in the entire bandwidth.", "title": "" }, { "docid": "7ccbb730f1ce8eca687875c632520545", "text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macroand micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, potassium, phosphorous, zinc-solubilizing V.S. Meena (*) Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India Indian Council of Agricultural Research – Vivekananda Institute of Hill Agriculture, Almora 263601, Uttarakhand, India e-mail: vijayssac.bhu@gmail.com; vijay.meena@icar.gov.in I. Bahadur • B.R. Maurya Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India A. Kumar Department of Botany, MMV, Banaras Hindu University, Varanasi 221005, India R.K. Meena Department of Plant Sciences, School of Life Sciences, University of Hyderabad, Hyderabad 500046, TG, India S.K. Meena Division of Soil Science and Agricultural Chemistry, Indian Agriculture Research Institute, New Delhi 110012, India J.P. Verma Institute of Environment and Sustainable Development, Banaras Hindu University, Varanasi 22100, Uttar Pradesh, India # Springer India 2016 V.S. Meena et al. (eds.), Potassium Solubilizing Microorganisms for Sustainable Agriculture, DOI 10.1007/978-81-322-2776-2_1 1 microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. 
It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.", "title": "" }, { "docid": "17ba29c670e744d6e4f9e93ceb109410", "text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-through, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.", "title": "" }, { "docid": "94638dc3bac02be0317599cbc02b5cdc", "text": "Discussion thread classification plays an important role for Massive Open Online Courses (MOOCs) forum. Most existing methods in this field focus on extracting text features (e.g. key words) from the content of discussions using NLP methods. However, diversity of languages used in MOOC forums results in poor expansibility of these methods. To tackle this problem, in this paper, we artificially design 23 language independent features related to structure, popularity and underlying social network of thread. Furthermore, a hybrid model which combines Gradient Boosting Decision Tree (GBDT) with Linear Regression (LR) (GBDT + LR) is employed to reduce the traditional cost of manual feature learning for discussion threads classification. Experiments are carried out on the datasets contributed by Coursera with nearly 100,000 discussion threads of 60 courses taught in 4 different languages. Results demonstrate that our method can significantly improve the performance of discussion threads classification. It is worth noting that the average AUC of our model is 0.832, outperforming the baseline by 15%.", "title": "" }, { "docid": "4aa6103dca92cf8663139baf93f78a80", "text": "We propose a unified approach for summarization based on the analysis of video structures and video highlights. Our approach emphasizes both the content balance and perceptual quality of a summary. Normalized cut algorithm is employed to globally and optimally partition a video into clusters. A motion attention model based on human perception is employed to compute the perceptual quality of shots and clusters. The clusters, together with the computed attention values, form a temporal graph similar to Markov chain that inherently describes the evolution and perceptual importance of video clusters. 
In our application, the flow of a temporal graph is utilized to group similar clusters into scenes, while the attention values are used as guidelines to select appropriate sub-shots in scenes for summarization.", "title": "" }, { "docid": "6793ec9b73add6514f842c2899b4ecc8", "text": "In recent decades, the ad hoc network for vehicles has been a core network technology to provide comfort and security to drivers in vehicle environments. However, emerging applications and services require major changes in underlying network models and computing that require new road network planning. Meanwhile, blockchain, widely known as one of the disruptive technologies that has emerged in recent years, is experiencing rapid development and has the potential to revolutionize intelligent transport systems. Blockchain can be used to build an intelligent, secure, distributed and autonomous transport system. It allows better utilization of the infrastructure and resources of intelligent transport systems, particularly effective for crowdsourcing technology. In this paper, we propose a vehicle network architecture based on blockchain in the smart city (Block-VN). Block-VN is a reliable and secure architecture that operates in a distributed way to build the new distributed transport management system. We are considering a new network system of vehicles, Block-VN, above them. In addition, we examine how the network of vehicles evolves with paradigms focused on networking and vehicular information. Finally, we discuss service scenarios and design principles for Block-VN.", "title": "" }, { "docid": "38a0f56e760b0e7a2979c90a8fbcca68", "text": "The Rubik's Cube is perhaps the world's most famous and iconic puzzle, well-known to have a rich underlying mathematical structure (group theory). In this paper, we show that the Rubik's Cube also has a rich underlying algorithmic structure. Specifically, we show that the n×n×n Rubik's Cube, as well as the n×n×1 variant, has a "God's Number" (diameter of the configuration space) of Θ(n²/log n). The upper bound comes from effectively parallelizing standard Θ(n²) solution algorithms, while the lower bound follows from a counting argument. The upper bound gives an asymptotically optimal algorithm for solving a general Rubik's Cube in the worst case. Given a specific starting state, we show how to find the shortest solution in an n×O(1)×O(1) Rubik's Cube. Finally, we show that finding this optimal solution becomes NP-hard in an n×n×1 Rubik's Cube when the positions and colors of some cubies are ignored (not used in determining whether the cube is solved).", "title": "" }, { "docid": "8cd52cdc44c18214c471716745e3c00f", "text": "The design of electric vehicles requires a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. 
At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.", "title": "" }, { "docid": "c983e94a5334353ec0e2dabb0e95d92a", "text": "Digital family calendars have the potential to help families coordinate, yet they must be designed to easily fit within existing routines or they will simply not be used. To understand the critical factors affecting digital family calendar design, we extended LINC, an inkable family calendar, to include ubiquitous access, and then conducted a month-long field study with four families. Adoption and use of LINC during the study demonstrated that LINC successfully supported the families' existing calendaring routines without disrupting existing successful social practices. Families also valued the additional features enabled by LINC. For example, several primary schedulers felt that ubiquitous access positively increased involvement by additional family members in the calendaring routine. The field trials also revealed some unexpected findings, including the importance of mobility---both within and outside the home---for the Tablet PC running LINC.", "title": "" }, { "docid": "fef66948f4f647f88cc3921366f45e49", "text": "Acoustic correlates of stress [duration, fundamental frequency (Fo), and intensity] were investigated in a language (Thai) in which both duration and Fo are employed to signal lexical contrasts. Stimuli consisted of 25 pairs of segmentally/tonally identical, syntactically ambiguous sentences. The first member of each sentence pair contained a two-syllable noun-verb sequence exhibiting a strong-strong (--) stress pattern, the second member a two-syllable noun compound exhibiting a weak-strong (--) stress pattern. Measures were taken of five prosodic dimensions of the rhyme portion of the target syllable: duration, average Fo, Fo standard deviation, average intensity, and intensity standard deviation. Results of linear regression indicated that duration is the predominant cue in signaling the distinction between stressed and unstressed syllables in Thai. Discriminant analysis showed a stress classification accuracy rate of over 99%. Findings are discussed in relation to the varying roles that Fo, intensity, and duration have in different languages given their phonological structure.", "title": "" }, { "docid": "f153ee3853f40018ed0ae8b289b1efcf", "text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-the-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significant reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.", "title": "" }, { "docid": "eaf30f31b332869bc45ff1288c41da71", "text": "Search Engines: Information Retrieval In Practice is written by Bruce Croft in the English language. 
Released on 2009-02-16, this book has a page count of 552 and consists of helpful information with an easy reading experience. The book was published by Addison-Wesley; it is one of the best books in its subject genre and gives you everything to love about reading. You can find the Search Engines: Information Retrieval In Practice book with ISBN 0136072240.", "title": "" }, { "docid": "1dbe74730ec8b780d1391827491b7b45", "text": "Collaborative filtering (CF) and content-based filtering (CBF) have widely been used in information filtering applications, both approaches having their individual strengths and weaknesses. This paper proposes a novel probabilistic framework to unify CF and CBF, named collaborative ensemble learning. Based on content based probabilistic models for each user’s preferences (the CBF idea), it combines a society of users’ preferences to predict an active user’s preferences (the CF idea). While retaining an intuitive explanation, the combination scheme can be interpreted as a hierarchical Bayesian approach in which a common prior distribution is learned from related experiments. It does not require a global training stage and thus can incrementally incorporate new data. We report results based on two data sets, the Reuters-21578 text data set and a data base of user opinions on art images. For both data sets, collaborative ensemble achieved excellent performance in terms of recommendation accuracy. In addition to recommendation engines, collaborative ensemble learning is applicable to problems typically solved via classical hierarchical Bayes, like multisensor fusion and multitask learning.", "title": "" }, { "docid": "9aee53ac010545e963f4e4697bf04ec2", "text": "For financial institutions, the ability to predict or forecast business failures is crucial, as incorrect decisions can have direct financial consequences. Bankruptcy prediction and credit scoring are the two major research problems in the accounting and finance domain. In the literature, a number of models have been developed to predict whether borrowers are in danger of bankruptcy and whether they should be considered a good or bad credit risk. Since the 1990s, machine-learning techniques, such as neural networks and decision trees, have been studied extensively as tools for bankruptcy prediction and credit score modeling. This paper reviews 130 related journal papers from the period between 1995 and 2010, focusing on the development of state-of-the-art machine-learning techniques, including hybrid and ensemble classifiers. Related studies are compared in terms of classifier design, datasets, baselines, and other experimental factors. This paper presents the current achievements and limitations associated with the development of bankruptcy-prediction and credit-scoring models employing machine learning. We also provide suggestions for future research.", "title": "" }, { "docid": "5f4235a8f9095afe6697c9fdb00e0a43", "text": "Typically, firms decide whether or not to develop a new product based on their resources, capabilities and the return on investment that the product is estimated to generate. We propose that firms adopt a broader heuristic for making new product development choices. Our heuristic approach requires moving beyond traditional finance-based thinking, and suggests that firms concentrate on technological trajectories by combining technology roadmapping, information technology (IT) and supply chain management to make more sustainable new product development decisions.
Using the proposed holistic heuristic methods, versus relying on traditional finance-based decision-making tools (e.g., emphasizing net present value or internal rate of return projections), enables firms to plan beyond the short-term and immediate set of technologies at hand. Our proposed heuristic approach enables firms to forecast technologies and markets, and hence, new product priorities in the longer term. Investments in new products should, as a result, generate returns over a longer period than traditionally expected, giving firms more sustainable investments. New products are costly and need to have a durable presence in the market. Transaction costs and resources will be saved, as firms make new product development decisions less frequently.", "title": "" }, { "docid": "7f662aa8c1bab3add687755dd37f52a1", "text": "Although researchers have discovered that Minnie G. had nearly 50 years of progression-free survival, the absence of her original surgical records has precluded anything more than speculation as to the etiology of her symptoms or the details of her admission. Following IRB approval, and through the courtesy of the Alan Mason Chesney Archives, the microfilm surgical records from the Johns Hopkins Hospital, 1896–1912 were reviewed. Using the surgical number provided in Cushing’s publications, the record for Minnie G. was recovered for further review. Cushing’s diagnosis relied largely on history and physical findings. Minnie G. presented with stigmata associated with classic Cushing’s Syndrome: abdominal stria, supraclavicular fat pads, and a rounded face. However, she also presented with unusual physical findings: exophthalmos, and irregular pigmentation of the extremities, face, and eyelids. A note in the chart indicates Minnie G. spoke very little English, implying the history-taking was fraught with opportunities for error. Although there remains no definitive etiology for Minnie G.’s symptoms, this report contributes additional information about her diagnosis and treatment.", "title": "" }, { "docid": "b3cf36dc0536d3518f1bef31c290328f", "text": "BACKGROUND\nHospital-acquired pressure ulcers are a serious patient safety concern, associated with poor patient outcomes and high healthcare costs. They are also viewed as an indicator of nursing care quality.\n\n\nOBJECTIVE\nTo evaluate the effectiveness of a pressure ulcer prevention care bundle in preventing hospital-acquired pressure ulcers among at risk patients.\n\n\nDESIGN\nPragmatic cluster randomised trial.\n\n\nSETTING\nEight tertiary referral hospitals with >200 beds each in three Australian states.\n\n\nPARTICIPANTS\n1600 patients (200/hospital) were recruited. Patients were eligible if they were: ≥18 years old; at risk of pressure ulcer because of limited mobility; expected to stay in hospital ≥48h and able to read English.\n\n\nMETHODS\nHospitals (clusters) were stratified in two groups by recent pressure ulcer rates and randomised within strata to either a pressure ulcer prevention care bundle or standard care. The care bundle was theoretically and empirically based on patient participation and clinical practice guidelines.
It was multi-component, with three messages for patients' participation in pressure ulcer prevention care: keep moving; look after your skin; and eat a healthy diet. Training aids for patients included a DVD, brochure and poster. Nurses in intervention hospitals were trained in partnering with patients in their pressure ulcer prevention care. The statistician, recruiters, and outcome assessors were blinded to group allocation and interventionists blinded to the study hypotheses, tested at both the cluster and patient level. The primary outcome, incidence of hospital-acquired pressure ulcers, which applied to both the cluster and individual participant level, was measured by daily skin inspection.\n\n\nRESULTS\nFour clusters were randomised to each group and 799 patients per group analysed. The intraclass correlation coefficient was 0.035. After adjusting for clustering and pre-specified covariates (age, pressure ulcer present at baseline, body mass index, reason for admission, residence and number of comorbidities on admission), the hazard ratio for new pressure ulcers developed (pressure ulcer prevention care bundle relative to standard care) was 0.58 (95% CI: 0.25, 1.33; p=0.198). No adverse events or harms were reported.\n\n\nCONCLUSIONS\nAlthough the pressure ulcer prevention care bundle was associated with a large reduction in the hazard of ulceration, there was a high degree of uncertainty around this estimate and the difference was not statistically significant. Possible explanations for this non-significant finding include that the pressure ulcer prevention care bundle was effective but the sample size too small to detect this.", "title": "" }, { "docid": "36bdd8eefd2f72d06a4cefe68127ce04", "text": "Dantzig, Fulkerson, and Johnson (1954) introduced the cutting-plane method as a means of attacking the traveling salesman problem; this method has been applied to broad classes of problems in combinatorial optimization and integer programming. In this paper we discuss an implementation of Dantzig et al.'s method that is suitable for TSP instances having 1,000,000 or more cities. Our aim is to use the study of the TSP as a step towards understanding the applicability and limits of the general cutting-plane method in large-scale applications. 1. The Cutting-Plane Method The symmetric traveling salesman problem, or TSP for short, is this: given a nite number of \\cities\" along with the cost of travel between each pair of them, nd the cheapest way of visiting all of the cities and returning to your starting point. The travel costs are symmetric in the sense that traveling from city X to city Y costs just as much as traveling from Y to X; the \\way of visiting all of the cities\" is simply the order in which the cities are visited. The prominence of the TSP in the combinatorial optimization literature is to a large extent due to its success as an engine-of-discovery for techniques that have application far beyond the narrow con nes of the TSP itself. Foremost among the TSP-inspired discoveries is Dantzig, Fulkerson, and Johnson's (1954) cutting-plane method, which can be used to attack any problem minimize cx subject to x 2 S (1) such that S is a nite subset of some R and such that an eÆcient algorithm to recognize points of S is available. This method is iterative; each of its D. Applegate: Algorithms and Optimization Department, AT&T Labs { Research, Florham Park, NJ 07932, USA R. Bixby: Computational and Applied Mathematics, Rice University, Houston, TX 77005, USA V. 
Chv atal: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA W. Cook: Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA ? Supported by ONR Grant N00014-03-1-0040 2 David Applegate et al. iterations begins with a linear programming (LP) relaxation of (1), meaning a problem minimize cx subject to Ax b (2) such that the polyhedron P de ned as fx : Ax bg contains S and is bounded. Since P is bounded, we can nd an optimal solution x of (2) such that x is an extreme point of P . If x belongs to S, then it constitutes an optimal solution of (1); otherwise some linear inequality is satis ed by all the points in S and violated by x ; such an inequality is called a cutting plane or simply a cut . In the latter case, we nd a nonempty family of cuts, add them to the system Ax b, and use the resulting tighter relaxation of (1) in the next iteration of the procedure. Dantzig et al. demonstrated the power of their cutting-plane method by solving a 49-city instance of the TSP, which was an impressive size in 1954. The TSP is a special case of (1) with m = n(n 1)=2, where n is the number of the cities, and with S consisting of the set of the incidence vectors of all the Hamiltonian cycles through the set V of the n cities; in this context, Hamiltonian cycles are commonly called tours. In Dantzig et al.'s attack, the initial P consists of all vectors x, with components subscripted by edges of the complete graph on V , that satisfy 0 xe 1 for all edges e (3) and P (xe : v 2 e) = 2 for all cities v. (4) (Throughout this paper, we treat the edges of a graph as two-point subsets of its vertex-set: v 2 emeans that vertex v is an endpoint of edge e; e\\Q 6= ; means that edge e has an endpoint in set Q; e Q 6= ;means that edge e has an endpoint outside setQ; and so on.) All but two of their cuts have the form P (xe : e\\Q 6= ;; e Q 6= ;) 2 such that Q is a nonempty proper subset of V . Dantzig et al. called such inequalities \\loop constraints\"; nowadays, they are commonly referred to as subtour elimination inequalities ; we are going to call them simply subtour inequalities . (As for the two exceptional cuts, Dantzig et al. give ad hoc combinatorial arguments to show that these inequalities are satis ed by incidence vectors of all tours through the 49 cities and, in a footnote, they say \\We are indebted to I. Glicksberg of Rand for pointing out relations of this kind to us.\") The original TSP algorithm of Dantzig et al. has been extended and improved by many researchers, led by the fundamental contributions of M. Grotschel and M. Padberg; surveys of this work can be found in Grotschel and Padberg (1985), Padberg and Gr otschel (1985), J unger et al. (1995, 1997), and Naddef (2002). The cutting-plane method is the core of nearly all successful approaches proposed to date for obtaining provably optimal solutions to the TSP, and it remains the only known technique for solving instances having more than several hundred cities. Beyond the TSP, the cutting-plane method has been applied to a host of NP-hard problems (see J unger et al. (1995)), and is an important component of modern Title Suppressed Due to Excessive Length 3 mixed-integer-programming codes (see Marchand et al. (1999) and Bixby et al. (2000, 2003)). In this paper we discuss an implementation of the Dantzig et al. algorithm designed for TSP instances having 1,000,000 or more cities; very large TSP instances arise is applications such as genome-sequencing (Agarwala et al. 
(2000)), but the primary aim of our work is to use the TSP as a means of studying issues that arise in the general application of cuttingplane algorithms for large-scale problems. Instances of this size are well beyond the reach of current (exact) solution techniques, but even in this case the cutting-plane method can be used to provide strong lower bounds on the optimal tour lengths. For example, we use cutting planes to show that the best known tour for a speci c 1,000,000-city randomly generated Euclidean instance is no more than 0.05% from optimality. This instance was created by David S. Johnson in 1994, studied by Johnson and McGeoch (1997, 2002) and included in the DIMACS (2001) challenge test set under the name \\E1M.0\". Its cities are points with integer coordinates drawn uniformly from the 1,000,000 by 1,000,000 grid; the cost of an edge is the Euclidean distance between the corresponding points, rounded to the nearest integer. The paper is organized as follows. In Section 2 we present separation algorithms for subtour inequalities and in Section 3 we present simple methods for separating a further class of TSP inequalities known as \\blossoms\"; in these two sections we consider only methods that can be easily applied to large problem instances. In Section 4 we discuss methods for adjusting cutting planes to respond to changes in the optimal LP solution x ; again, we consider only procedures that perform well on large instances. In Section 5 we discuss a linear-time implementation of the \\local cut\" technique for generating TSP inequalities by mapping the space of variables to a space of very low dimension. The core LP problem that needs to be solved in each iteration of the cutting-plane algorithm is discussed in Section 6. Data structures for storing cutting planes are treated in Section 7 and methods for handling the n(n 1)=2 edges are covered in Section 8. In Section 9 we report on computational results for a variety of test instances. The techniques developed in this paper are incorporated into the Concorde computer code of Applegate et al. (2003); the Concorde code is freely available for use in research studies. 2. Subtour Inequalities A separation algorithm for a class C of linear inequalities is an algorithm that, given any x , returns either an inequality in C that is violated by x or a failure message. Separation algorithms that return a failure message only if all inequalities in C are satis ed by x are called exact ; separation algorithms that may return a failure message even when some inequality in C is violated by x are called heuristic. 4 David Applegate et al. We present below several fast heuristics for subtour separation, and discuss brie y the Padberg and Rinaldi (1990a) exact subtour separation procedure. 2.1. The x(S; T ) notation Let V be a nite set of cities, let E be the edge-set of the complete graph on V , and let w be a vector indexed by E. Given disjoint subsets S; T of V , we write w(S; T ) to mean X (we : e 2 E; e \\ S 6= ;; e \\ T 6= ;): This notation is adopted from Ford and Fulkerson (1962); using it, the subtour inequality corresponding to S can be written as", "title": "" }, { "docid": "f4e7e0ea60d9697e8fea434990409c16", "text": "Prognostics is very useful to predict the degradation trend of machinery and to provide an alarm before a fault reaches critical levels. This paper proposes an ARIMA approach to predict the future machine status with accuracy improvement by an improved forecasting strategy and an automatic prediction algorithm. 
The improved forecasting strategy increases the number of model-building iterations and creates datasets for modeling dynamically, so that previously predicted values are not reused for forecasting and the predictions are generated only from the true observations. The automatic prediction algorithm can satisfy the requirement of real-time prognostics by automating the whole process of ARIMA modeling and forecasting based on the Box-Jenkins methodology and the improved forecasting strategy. The feasibility and effectiveness of the proposed approach are demonstrated through the prediction of the vibration characteristic in rotating machinery. The experimental results show that the approach can be applied successfully and effectively for prognostics of machine health condition.", "title": "" } ]
scidocsrr
b8d6292b10b684f88c40f1d142d71b08
On cognitive small cells in two-tier heterogeneous networks
[ { "docid": "804139352206af823bc8bae12789c416", "text": "In a two-tier heterogeneous network (HetNet) where femto access points (FAPs) with lower transmission power coexist with macro base stations (BSs) with higher transmission power, the FAPs may suffer significant performance degradation due to inter-tier interference. Introducing cognition into the FAPs through the spectrum sensing (or carrier sensing) capability helps them avoiding severe interference from the macro BSs and enhance their performance. In this paper, we use stochastic geometry to model and analyze performance of HetNets composed of macro BSs and cognitive FAPs in a multichannel environment. The proposed model explicitly accounts for the spatial distribution of the macro BSs, FAPs, and users in a Rayleigh fading environment. We quantify the performance gain in outage probability obtained by introducing cognition into the femto-tier, provide design guidelines, and show the existence of an optimal spectrum sensing threshold for the cognitive FAPs, which depends on the HetNet parameters. We also show that looking into the overall performance of the HetNets is quite misleading in the scenarios where the majority of users are served by the macro BSs. Therefore, the performance of femto-tier needs to be explicitly accounted for and optimized.", "title": "" } ]
[ { "docid": "bd19395492dfbecd58f5cfd56b0d00a7", "text": "The ubiquity of the various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit the individuals and communities greatly, the collection and analysis of the participators' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participators' location privacy, and few are done on participators' trajectory privacy. The effective analysis on trajectories that contain spatial-temporal history information will reveal participators' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model with considering the time factor from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participators' trajectories privacy effectively with lower information loss and costs than what is afforded by the other proposals.", "title": "" }, { "docid": "c071d5a7ff1dbfd775e9ffdee1b07662", "text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. 
The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.", "title": "" }, { "docid": "70ef01e33f48a52455141c3fa9130b01", "text": "The Physical Appearance Comparison Scale (PACS; Thompson, Heinberg, & Tantleff, 1991) was revised to assess appearance comparisons relevant to women and men in a wide variety of contexts. The revised scale (Physical Appearance Comparison Scale-Revised, PACS-R) was administered to 1176 college females. In Study 1, exploratory factor analysis and parallel analysis using one half of the sample suggested a single factor structure for the PACS-R. Study 2 utilized the remaining half of the sample to conduct confirmatory factor analysis, item analysis, and to examine the convergent validity of the scale. These analyses resulted in an 11-item measure that demonstrated excellent internal consistency and convergent validity with measures of body satisfaction, eating pathology, sociocultural influences on appearance, and self-esteem. Regression analyses demonstrated the utility of the PACS-R in predicting body satisfaction and eating pathology. Overall, results indicate that the PACS-R is a reliable and valid tool for assessing appearance comparison tendencies in women.", "title": "" }, { "docid": "ba203abd0bd55fc9d06fe979a604d741", "text": "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on largescale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "title": "" }, { "docid": "2edababb2f442f6ae93604170ef0a44b", "text": "The aim of the research, is to examine the relationship between adolescents' five-factor personality features by use of Social Media. As for sample, there are 548 girl and 441 boy students and they are between the ages of 11-18. Adolescents’ data participating in the study, are determined by Big Five Factor personality traits Scale. Prepared data on the use of social media called \"Personal Information Form\" has been obtained by researcher. In the analysis of data, understanding of social media use times whether it differs according to big five personality traits, According to the social media using time, there was no significant difference between the agreeableness and openness subscales. On the other hand, there is a significant differences between conscientiousness, extraversion and neuroticism. 
In association with five personality traits of social media purpose, it was found that there are significant differences with different personality traits for each purpose.", "title": "" }, { "docid": "357ff730c3d0f8faabe1fa14d4b04463", "text": "In this paper, we propose a novel two-stage video captioning framework composed of 1) a multi-channel video encoder and 2) a sentence-generating language decoder. Both of the encoder and decoder are based on recurrent neural networks with long-short-term-memory cells. Our system can take videos of arbitrary lengths as input. Compared with the previous sequence-to-sequence video captioning frameworks, the proposed model is able to handle multiple channels of video representations and jointly learn how to combine them. The proposed model is evaluated on two large-scale movie datasets (MPII Corpus and Montreal Video Description) and one YouTube dataset (Microsoft Video Description Corpus) and achieves the state-of-the-art performances. Furthermore, we extend the proposed model towards automatic American Sign Language recognition. To evaluate the performance of our model on this novel application, a new dataset for ASL video description is collected based on YouTube videos. Results on this dataset indicate that the proposed framework on ASL recognition is promising and will significantly benefit the independent communication between ASL users and", "title": "" }, { "docid": "bc4fa6a77bf0ea02456947696dc6dca3", "text": "We propose a constraint programming approach for the optimization of inventory routing in the liquefied natural gas industry. We present two constraint programming models that rely on a disjunctive scheduling representation of the problem. We also propose an iterative search heuristic to generate good feasible solutions for these models. Computational results on a set of largescale test instances demonstrate that our approach can find better solutions than existing approaches based on mixed integer programming, while being 4 to 10 times faster on average.", "title": "" }, { "docid": "ce3f7214e8ad4a29efa8c04fc8fa3a4b", "text": "Recognition of social signals, from human facial expressions or prosody of speech, is a popular research topic in human-robot interaction studies. There is also a long line of research in the spoken dialogue community that investigates user satisfaction in relation to dialogue characteristics. However, very little research relates a combination of multimodal social signals and language features detected during spoken face-to-face human-robot interaction to the resulting user perception of a robot. In this paper we show how different emotional facial expressions of human users, in combination with prosodic characteristics of human speech and features of human-robot dialogue, correlate with users’ impressions of the robot after a conversation. We find that happiness in the user’s recognised facial expression strongly correlates with likeability of a robot, while dialogue-related features (such as number of human turns or number of sentences per robot utterance) correlate with perceiving a robot as intelligent. In addition, we show that facial expression, emotional features, and prosody are better predictors of human ratings related to perceived robot likeability and anthropomorphism, while linguistic and non-linguistic features more often predict perceived robot intelligence and interpretability. 
As such, these characteristics may in future be used as an online reward signal for in-situ Reinforcement Learning-based adaptive human-robot dialogue systems. Figure 1: Left: a live view of experimental setup showing a participant interacting with Pepper. Right: a diagram of experimental setup showing the participant (green) and the robot (white) positioned face to face. The scene was recorded by cameras (triangles C) from the robot’s perspective focusing on the face of the participant and from the side, showing the whole scene. The experimenter (red) was seated behind a divider.", "title": "" }, { "docid": "b23db18b30963ae3b7000e75306d4c69", "text": "State-of-the-art semantic segmentation approaches increase the receptive field of their models by using either a downsampling path composed of poolings/strided convolutions or successive dilated convolutions. However, it is not clear which operation leads to best results. In this paper, we systematically study the differences introduced by distinct receptive field enlargement methods and their impact on the performance of a novel architecture, called Fully Convolutional DenseResNet (FC-DRN). FC-DRN has a densely connected backbone composed of residual networks. Following standard image segmentation architectures, receptive field enlargement operations that change the representation level are interleaved among residual networks. This allows the model to exploit the benefits of both residual and dense connectivity patterns, namely: gradient flow, iterative refinement of representations, multi-scale feature combination and deep supervision. In order to highlight the potential of our model, we test it on the challenging CamVid urban scene understanding benchmark and make the following observations: 1) downsampling operations outperform dilations when the model is trained from scratch, 2) dilations are useful during the finetuning step of the model, 3) coarser representations require fewer refinement steps, and 4) ResNets (by model construction) are good regularizers, since they can reduce the model capacity when needed. Finally, we compare our architecture to alternative methods and report state-of-the-art results on the CamVid dataset, with at least twice fewer parameters.", "title": "" }, { "docid": "3e06d3b5ca50bf4fcd9d354a149dd40c", "text": "In this paper, classification via sparse representation and multitask learning is presented for target recognition in SAR images. To capture the characteristics of SAR images, a multidimensional generalization of the analytic signal, namely the monogenic signal, is employed. The original signal can then be orthogonally decomposed into three components: 1) local amplitude; 2) local phase; and 3) local orientation. Since the components represent different kinds of information, it is beneficial to consider them jointly in a unifying framework. However, these components are infeasible to be directly utilized due to the high dimension and redundancy. To solve the problem, an intuitive idea is to define an augmented feature vector by concatenating the components. This strategy usually produces some information loss. To cover the shortage, this paper considers the three components in different learning tasks, in which some common information can be shared. Specifically, the component-specific feature descriptor for each monogenic component is produced first.
Inspired by the recent success of multitask learning, the resulting features are then fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks. The inference is reached in terms of the total reconstruction error accumulated from all tasks. The novelty of this paper includes 1) the development of three component-specific feature descriptors; 2) the introduction of multitask learning into sparse representation model; 3) the numerical implementation of proposed method; and 4) extensive comparative experimental studies on MSTAR SAR dataset, including target recognition under standard operating conditions, as well as extended operating conditions, and the capability of outliers rejection.", "title": "" }, { "docid": "12f8414a2cadd222c31805de8bb3ed87", "text": "In this paper we explore functions of bounded variation. We discuss properties of functions of bounded variation and consider three related topics. The related topics are absolute continuity, arc length, and the Riemann-Stieltjes integral.", "title": "" }, { "docid": "f043acf163d787c4a53924515b509aba", "text": "A two-wheeled self-balancing robot is a special type of wheeled mobile robot, its balance problem is a hot research topic due to its unstable state for controlling. In this paper, human transporter model has been established. Kinematic and dynamic models are constructed and two control methods: Proportional-integral-derivative (PID) and Linear-quadratic regulator (LQR) are implemented to test the system model in which controls of two subsystems: self-balance (preventing system from falling down when it moves forward or backward) and yaw rotation (steering angle regulation when it turns left or right) are considered. PID is used to control both two subsystems, LQR is used to control self-balancing subsystem only. By using simulation in Matlab, two methods are compared and discussed. The theoretical investigations for controlling the dynamic behavior are meaningful for design and fabrication. Finally, the result shows that LQR has a better performance than PID for self-balancing subsystem control.", "title": "" }, { "docid": "ec4dcce4f53e38909be438beeb62b1df", "text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.", "title": "" }, { "docid": "05cf044dcb3621a0190403a7961ecb00", "text": "This paper describes a real-time beat tracking system that recognizes a hierarchical beat structure comprising the quarter-note, half-note, and measure levels in real-world audio signals sampled from popular-music compact discs. 
Most previous beat-tracking systems dealt with MIDI signals and had difficulty in processing, in real time, audio signals containing sounds of various instruments and in tracking beats above the quarter-note level. The system described here can process music with drums and music without drums and can recognize the hierarchical beat structure by using three kinds of musical knowledge: of onset times, of chord changes, and of drum patterns. This paper also describes several applications of beat tracking, such as beat-driven real-time computer graphics and lighting control.", "title": "" }, { "docid": "572867885a16afc0af6a8ed92632a2a7", "text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.", "title": "" }, { "docid": "dd86d2530dfa9a44b84d85b9db18e200", "text": "In order to extract entities of a fine-grained category from semi-structured data in web pages, existing information extraction systems rely on seed examples or redundancy across multiple web pages. In this paper, we consider a new zero-shot learning task of extracting entities specified by a natural language query (in place of seeds) given only a single web page. Our approach defines a log-linear model over latent extraction predicates, which select lists of entities from the web page. The main challenge is to define features on widely varying candidate entity lists. We tackle this by abstracting list elements and using aggregate statistics to define features. Finally, we created a new dataset of diverse queries and web pages, and show that our system achieves significantly better accuracy than a natural baseline.", "title": "" }, { "docid": "b5fd22854e75a29507cde380999705a2", "text": "This study presents a high-efficiency-isolated single-input multiple-output bidirectional (HISMB) converter for a power storage system. According to the power management, the proposed HISMB converter can operate at a step-up state (energy release) and a step-down state (energy storage). At the step-up state, it can boost the voltage of a low-voltage input power source to a high-voltage-side dc bus and middle-voltage terminals. When the high-voltage-side dc bus has excess energy, one can reversely transmit the energy. The high-voltage dc bus can take as the main power, and middle-voltage output terminals can supply powers for individual middle-voltage dc loads or to charge auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based HISMB converter accomplishes the bidirectional power control with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. 
As a result, the energy of the leakage inductor of the coupled inductor can be recycled and released to the high-voltage-side dc bus and auxiliary power sources, and the voltage stresses on power switches can be greatly reduced. Moreover, the switching losses can be significantly decreased because of all power switches with zero-voltage-switching features. Therefore, the objectives of high-efficiency power conversion, electric isolation, bidirectional energy transmission, and various output voltage with different levels can be obtained. The effectiveness of the proposed HISMB converter is verified by experimental results of a kW-level prototype in practical applications.", "title": "" }, { "docid": "c9380c87222af7c9f4116cc02a68060c", "text": "Biatriospora (Ascomycota: Pleosporales, Biatriosporaceae) is a genus with unexplored diversity and poorly known ecology. This work expands the Biatriospora taxonomic and ecological concept by describing four new species found as endophytes of woody plants in temperate forests of the Czech Republic and in tropical regions, including Amazonia. Ribosomal DNA sequences, together with protein-coding genes (RPB2, EF1α), growth rates and morphology, were used for species delimitation and description. Ecological data gathered by this and previous studies and the inclusion of sequences deposited in public databases show that Biatriospora contains species that are endophytes of angiosperms in temperate and tropical regions as well as species that live in marine or estuarine environments. These findings show that this genus is more diverse and has more host associations than has been described previously. The possible adaptations enabling the broad ecological range of these fungi are discussed. Due to the importance that Biatriospora species have in bioprospecting natural products, we suggest that the species introduced here warrant further investigation.", "title": "" }, { "docid": "7d7ea6239106f614f892701e527122e2", "text": "The purpose of this study was to investigate the effects of aromatherapy on the anxiety, sleep, and blood pressure (BP) of percutaneous coronary intervention (PCI) patients in an intensive care unit (ICU). Fifty-six patients with PCI in ICU were evenly allocated to either the aromatherapy or conventional nursing care. Aromatherapy essential oils were blended with lavender, roman chamomile, and neroli with a 6 : 2 : 0.5 ratio. Participants received 10 times treatment before PCI, and the same essential oils were inhaled another 10 times after PCI. Outcome measures patients' state anxiety, sleeping quality, and BP. An aromatherapy group showed significantly low anxiety (t = 5.99, P < .001) and improving sleep quality (t = -3.65, P = .001) compared with conventional nursing intervention. The systolic BP of both groups did not show a significant difference by time or in a group-by-time interaction; however, a significant difference was observed between groups (F = 4.63, P = .036). The diastolic BP did not show any significant difference by time or by a group-by-time interaction; however, a significant difference was observed between groups (F = 6.93, P = .011). In conclusion, the aromatherapy effectively reduced the anxiety levels and increased the sleep quality of PCI patients admitted to the ICU. Aromatherapy may be used as an independent nursing intervention for reducing the anxiety levels and improving the sleep quality of PCI patients.", "title": "" } ]
scidocsrr
f6047a528d25c4d52d310ffcc641c731
An approach for detection and family classification of malware based on behavioral analysis
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "f1ce50e0b787c1d10af44252b3a7e656", "text": "This paper proposes a scalable approach for distinguishing malicious files from clean files by investigating the behavioural features using logs of various API calls. We also propose, as an alternative to the traditional method of manually identifying malware files, an automated classification system using runtime features of malware files. For both projects, we use an automated tool running in a virtual environment to extract API call features from executables and apply pattern recognition algorithms and statistical methods to differentiate between files. Our experimental results, based on a dataset of 1368 malware and 456 cleanware files, provide an accuracy of over 97% in distinguishing malware from cleanware. Our techniques provide a similar accuracy for classifying malware into families. In both cases, our results outperform comparable previously published techniques.", "title": "" }, { "docid": "f5d769be1305755fe0753d1e22cbf5c9", "text": "The number of malware is increasing rapidly and a lot of malware use stealth techniques such as encryption to evade pattern matching detection by anti-virus software. To resolve the problem, behavior based detection method which focuses on malicious behaviors of malware have been researched. Although they can detect unknown and encrypted malware, they suffer a serious problem of false positives against benign programs. For example, creating files and executing them are common behaviors performed by malware, however, they are also likely performed by benign programs thus it causes false positives. In this paper, we propose a malware detection method based on evaluation of suspicious process behaviors on Windows OS. To avoid false positives, our proposal focuses on not only malware specific behaviors but also normal behavior that malware would usually not do. Moreover, we implement a prototype of our proposal to effectively analyze behaviors of programs. Our evaluation experiments using our malware and benign program datasets show that our malware detection rate is about 60% and it does not cause any false positives. Furthermore, we compare our proposal with completely behavior-based anti-virus software. Our results show that our proposal puts few burdens on users and reduces false positives.", "title": "" } ]
[ { "docid": "f2a677515866e995ff8e0e90561d7cbc", "text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.", "title": "" }, { "docid": "d73af831462af9ea510fb9a00c152ab6", "text": "Cloud computing is a new paradigm for using ICT services— only when needed and for as long as needed, and paying only for service actually consumed. Benchmarking the increasingly many cloud services is crucial for market growth and perceived fairness, and for service design and tuning. In this work, we propose a generic architecture for benchmarking cloud services. Motivated by recent demand for data-intensive ICT services, and in particular by processing of large graphs, we adapt the generic architecture to Graphalytics, a benchmark for distributed and GPU-based graph analytics platforms. Graphalytics focuses on the dependence of performance on the input dataset, on the analytics algorithm, and on the provisioned infrastructure. The benchmark provides components for platform configuration, deployment, and monitoring, and has been tested for a variety of platforms. We also propose a new challenge for the process of benchmarking data-intensive services, namely the inclusion of the data-processing algorithm in the system under test; this increases significantly the relevance of benchmarking results, albeit, at the cost of increased benchmarking duration.", "title": "" }, { "docid": "efbaec32e42bdb9f12341d6be588a985", "text": "Bacterial quorum sensing (QS) is a density dependent communication system that regulates the expression of certain genes including production of virulence factors in many pathogens. Bioactive plant extract/compounds inhibiting QS regulated gene expression may be a potential candidate as antipathogenic drug. In this study anti-QS activity of peppermint (Mentha piperita) oil was first tested using the Chromobacterium violaceum CVO26 biosensor. Further, the findings of the present investigation revealed that peppermint oil (PMO) at sub-Minimum Inhibitory Concentrations (sub-MICs) strongly interfered with acyl homoserine lactone (AHL) regulated virulence factors and biofilm formation in Pseudomonas aeruginosa and Aeromonas hydrophila. The result of molecular docking analysis attributed the QS inhibitory activity exhibited by PMO to menthol. Assessment of ability of menthol to interfere with QS systems of various Gram-negative pathogens comprising diverse AHL molecules revealed that it reduced the AHL dependent production of violacein, virulence factors, and biofilm formation indicating broad-spectrum anti-QS activity. Using two Escherichia coli biosensors, MG4/pKDT17 and pEAL08-2, we also confirmed that menthol inhibited both the las and pqs QS systems. Further, findings of the in vivo studies with menthol on nematode model Caenorhabditis elegans showed significantly enhanced survival of the nematode. 
Our data identified menthol as a novel broad spectrum QS inhibitor.", "title": "" }, { "docid": "5cb970d7a207865ed0048fd20ce5fff2", "text": "Effective evaluation is necessary in order to ensure systems adequately meet the requirements and information processing needs of the users and scope of the system. Technology acceptance model is one of the most popular and effective models for evaluation. A number of studies have proposed evaluation frameworks to aid in evaluation work. The end users for evaluation the acceptance of new technology or system have a lack of knowledge to examine and evaluate some features in the new technology/system. This will give a fake evaluation results of the new technology acceptance. This paper proposes a novel evaluation model to evaluate user acceptance of software and system technology by modifying the dimensions of the Technology Acceptance Model (TAM) and added additional success dimension for expert users. The proposed model has been validated by an empirical study based on a questionnaire. The results indicated that the expert users have a strong significant influence to help in evaluation and pay attention to some features that end users have lack of knowledge to evaluate it.", "title": "" }, { "docid": "eabb50988aeb711995ff35833a47770d", "text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.", "title": "" }, { "docid": "7057a9c1cedafe1fca48b886afac20d3", "text": "In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. 
We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.", "title": "" }, { "docid": "e723f76f4c9b264cbf4361b72c7cbf10", "text": "With the constant growth in Information and Communication Technology (ICT) in the last 50 years or so, electronic communication has become part of the present day system of living. Equally, smileys or emoticons were innovated in 1982, and today the genre has attained a substantial patronage in various aspects of computer-mediated communication (CMC). Ever since written forms of electronic communication lack the face-to-face (F2F) situation attributes, emoticons are seen as socio-emotional suppliers to the CMC. This article reviews scholarly research in that field in order to compile variety of investigations on the application of emoticons in some facets of CMC, i.e. Facebook, Instant Messaging (IM), and Short Messaging Service (SMS). Key findings of the review show that emoticons do not just serve as paralanguage elements rather they are compared to word morphemes with distinctive significative functions. In other words, they are morpheme-like units and could be derivational, inflectional, or abbreviations but not unbound. The findings also indicate that emoticons could be conventionalized as well as being paralinguistic elements, therefore, they should be approached as contributory to conversation itself not mere compensatory to language.", "title": "" }, { "docid": "9809596697119fb50978470aaec837d6", "text": "Tuning of PID controller parameters is one of the usual tasks of the control engineers due to the wide applications of this class of controllers in industry. In this paper the Iterative Feedback Tuning (IFT) method is applied to tune the PID parameters. The main advantage of this method is that there is no need to the model of the system, so that is useful in many processes which there is no obvious model of the system. In many cases this feature can be so useful in tuning the controller parameters. The IFT is applied here to tune the PID parameters. Speed control of DC motor was employed to demonstrate the effectiveness of the method. The results is compared with other tuning methods and represented the good performance of the designed controller. As it is shown, the step response of the system controlled by PID tuned with IFT has more robustness and performs well.", "title": "" }, { "docid": "9b2291ef3e605d85b6d0dba326aa10ef", "text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. 
Our results indicate that the multi-objective approach identifies the exact target solution more often than the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.", "title": "" }, { "docid": "379138e53ed204ff46b657185ff86368", "text": "Human pose-estimation in a multi-person image involves detection of various body parts and grouping them into individual person clusters. While the former task is challenging due to mutual occlusions, the combinatorial complexity of the latter task is very high. We propose a greedy part assignment algorithm that exploits the inherent structure of the human body to lower the complexity of the graphical model, compared to any of the prior published works. This is accomplished by (i) reducing the number of part-candidates using the estimated number of people in the image, (ii) doing a greedy sequential assignment of part-classes, following the kinematic chain from head to ankle (iii) doing a greedy assignment of parts in each part-class set, to person-clusters (iv) limiting the candidate person clusters to the most proximal clusters using human anthropometric data and (v) using only a specific subset of pre-assigned parts for establishing pairwise structural constraints. We show that these steps sparsify the body-parts relationship graph and reduce the algorithm's complexity to be linear in the number of candidates of any single part-class. We also propose a method for spawning person-clusters from any unassigned significant body part to make the algorithm robust to occlusions. We show that our proposed part-assignment algorithm, despite using a sub-optimal pre-trained DNN model, achieves state of the art results on both MPII and WAF pose datasets, demonstrating the robustness of our approach.", "title": "" }, { "docid": "3a0275d7834a6fb1359bb7d3bef14e97", "text": "With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve quality of service (QoS) in IoT networks is becoming a challenging problem. Currently most interaction between the IoT devices and the supporting back-end servers is done through large scale cloud data centers. However, with the exponential growth of IoT devices and the amount of data they produce, communication between \"things\" and cloud will be costly, inefficient, and in some cases infeasible. Fog computing serves as a solution for this as it provides computation, storage, and networking resources for IoT, closer to things and users. One of the promising advantages of fog is reducing service delay for end user applications, whereas cloud provides extensive computation and storage capacity with a higher latency. Thus it is necessary to understand the interplay between fog computing and cloud, and to evaluate the effect of fog computing on the IoT service delay and QoS. In this paper we will introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.", "title": "" }, { "docid": "ce13d49ba27d33db28fd5aaf991b2214", "text": "The performance of a standard model predictive controller (MPC) is directly related to its predictive model.
If there are unmodeled periodic disturbances in the actual system, MPC will be difficult to suppress the disturbances, thus causing fluctuations of system output. To solve this problem, this paper proposes an improved MPC named predictive-integral-resonant control (PIRC). Compared with the standard MPC, the proposed PIRC could enhance the suppression ability for disturbances by embedding the internal model composing of the integral and resonant loop. Furthermore, this paper applies the proposed PIRC to PMSM drives, and proposes the PMSM control strategy based on the cascaded PIRC, which could suppress periodic disturbances caused by the dead time effects, current sampling errors, and so on. The experimental results show that the PIRC can suppress periodic disturbances in the drive system, thus ensuring good current and speed performance. Meanwhile, the PIRC could maintain the excellent dynamic performance as the standard MPC.", "title": "" }, { "docid": "e48da0cf3a09b0fd80f0c2c01427a931", "text": "Timely analysis of information in cybersecurity necessitates automated information extraction from unstructured text. Unfortunately, state-of-the-art extraction methods require training data, which is unavailable in the cyber-security domain. To avoid the arduous task of handlabeling data, we develop a very precise method to automatically label text from several data sources by leveraging article-specific structured data and provide public access to corpus annotated with cyber-security entities. We then prototype a maximum entropy model that processes this corpus of auto-labeled text to label new sentences and present results showing the Collins Perceptron outperforms the MLE with LBFGS and OWL-QN optimization for parameter fitting. The main contribution of this paper is an automated technique for creating a training corpus from text related to a database. As a multitude of domains can benefit from automated extraction of domain-specific concepts for which no labeled data is available, we hope our solution is widely applicable.", "title": "" }, { "docid": "3f418dd3a1374a7928e2428aefe4fe29", "text": "The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach for tackling this problem is commonly known as pruning and it consists of training a larger than necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "15fa73633d6ec7539afc91bb1f45098f", "text": "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. 
An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.", "title": "" }, { "docid": "45f120b05b3c48cd95d5dd55031987cb", "text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.", "title": "" }, { "docid": "b1d00c44127956ab703204490de0acd7", "text": "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.", "title": "" }, { "docid": "30799ad2796b9715fb70be87438edf64", "text": "This study investigated the impact of introducing the Klein-Bell ADL Scale into a rehabilitation medicine service. 
A pretest and a posttest questionnaire of rehabilitation team members and a pretest and a posttest audit of occupational therapy documentation were completed. Results of the questionnaire suggested that the ADL scale influenced rehabilitation team members' observations in the combined area of occupational therapy involvement in self-care, improvement in the identification of treatment goals and plans, and communication between team members. Results of the audit suggested that the thoroughness and quantification of occupational therapy documentation improved. The clinical implications of these findings recommend the use of the Klein-Bell ADL Scale in rehabilitation services for improving occupational therapy documentation and for enhancing rehabilitation team effectiveness.", "title": "" }, { "docid": "ca7deb4d72ceb8325861724722345a61", "text": "Synthesizing prior research, this paper designs a relatively comprehensive and holistic characterization of business analytics – one that serves as a foundation on which researchers, practitioners, and educators can base their studies of business analytics. As such, it serves as an initial ontology for business analytics as a field of study. The foundation has three main parts dealing with the whence and whither of business analytics: identification of dimensions along which business analytics possibilities can be examined, derivation of a six-class taxonomy that covers business analytics perspectives in the literature, and design of an inclusive framework for the field of business analytics. In addition to unifying the literature, a major contribution of the designed framework is that it can stimulate thinking about the nature, roles, and future of business analytics initiatives. We show how this is done by deducing a host of unresolved issues for consideration by researchers, practitioners, and educators. We find that business analytics involves issues quite aside from data management, number crunching, technology use, systematic reasoning, and so forth. According to a study by Gartner, the technology category of \"analytics and business intelligence\" is the top priority of chief information officers, and comprises a $12.2B market [1]. It is seen as a higher priority than such categories as mobile technology, cloud computing, and collaboration technology. Further, Gartner finds that the top technology priority of chief financial officers is analytics [2]. Similarly, in studies involving interviews with thousands of chief information officers, worldwide, IBM asked, \"which visionary plan do you have to increase competitiveness over the next 3 to 5 years?\" In both 2011 and 2009, 83% of respondents identify \"Business Intelligence and Analytics\" as their number-one approach for achieving greater competitiveness. Among all types of plans, this is the top percentage for both years. To put this in perspective, consider 2011 results, in which business intelligence and analytics exceeds such other competitiveness plans as mobility solutions (ranked 2nd at 74%), cloud computing (ranked 4th at 60%), and social networking (ranked 8th at 55%) [3]. IDC reports that the business analytics software market grew by 13.8% during 2011 to $32B, and predicts it to be at $50.7B in revenue by 2016 [4,5]. It appears that a driver for this growth is the perception or realization that such investments yield value. 
Across a …", "title": "" }, { "docid": "b11c04a5aacac0d369c636b1fad47570", "text": "Draft of textbook chapter on neural machine translation. a comprehensive treatment of the topic, ranging from introduction to neural networks, computation graphs, description of the currently dominant attentional sequence-to-sequence model, recent refinements, alternative architectures and challenges. Written as chapter for the textbook Statistical Machine Translation. Used in the JHU Fall 2017 class on machine translation.", "title": "" } ]
scidocsrr
34bbcfce1c78182b1dd68e8efb7849e3
Anomaly detection using baseline and K-means clustering
[ { "docid": "fdc903a98097de8b7533b3e2fe209863", "text": "As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems—the cyberspace’s equivalent to the burglar alarm—join ranks with firewalls as one of the fundamental technologies for network security. However, today’s commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ‘‘zero day’’ attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "34993e22f91f3d5b31fe0423668a7eb1", "text": "K-means as a clustering algorithm has been studied in intrusion detection. However, with the deficiency of global search ability it is not satisfactory. Particle swarm optimization (PSO) is one of the evolutionary computation techniques based on swarm intelligence, which has high global search ability. So K-means algorithm based on PSO (PSO-KM) is proposed in this paper. Experiment over network connection records from KDD CUP 1999 data set was implemented to evaluate the proposed method. A Bayesian classifier was trained to select some fields in the data set. The experimental results clearly showed the outstanding performance of the proposed method", "title": "" } ]
[ { "docid": "514dd8425b91525cab1631ff8c358bbb", "text": "Embedded programming is typically made accessible through modular electronics toolkits. In this paper, we explore an alternative approach, combining microcontrollers with craft materials and processes as a means of bringing new groups of people and skills to technology production. We have developed simple and robust techniques for drawing circuits with conductive ink on paper, enabling off-the-shelf electronic components to be embedded directly into interactive artifacts. We have also developed an set of hardware and software tools -- an instance of what we call an \"untoolkit\" -- to provide an accessible toolchain for the programming of microcontrollers. We evaluated our techniques in a number of workshops, one of which is detailed in the paper. Four broader themes emerge: accessibility and appeal, the integration of craft and technology, microcontrollers vs. electronic toolkits, and the relationship between programming and physical artifacts. We also expand more generally on the idea of an untoolkit, offering a definition and some design principles, as well as suggest potential areas of future research.", "title": "" }, { "docid": "692adf7c8f656823a41b72350cf06269", "text": "Mindfulness-based interventions are increasingly used in the treatment and prevention of mental health conditions. Despite this, the mechanisms of change for such interventions are only beginning to be understood, with a number of recent studies assessing changes in brain activity. The aim of this systematic review was to assess changes in brain functioning associated with manualised 8-session mindfulness interventions. Searches of PubMed and Scopus databases resulted in 39 papers, 7 of which were eligible for inclusion. The most consistent longitudinal effect observed was increased insular cortex activity following mindfulness-based interventions. In contrast to previous reviews, we did not find robust evidence for increased activity in specific prefrontal cortex sub-regions. These findings suggest that mindfulness interventions are associated with changes in functioning of the insula, plausibly impacting awareness of internal reactions 'in-the-moment'. The studies reviewed here demonstrated a variety of effects across populations and tasks, pointing to the need for greater consistency in future study design.", "title": "" }, { "docid": "0153774b49121d8735cc3d33df69fc00", "text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.", "title": "" }, { "docid": "25c8d687e6044ae734270bb0d7fd8868", "text": "Continual learning broadly refers to the algorithms which aim to learn continuously over time across varying domains, tasks or data distributions. This is in contrast to algorithms restricted to learning a fixed number of tasks in a given domain, assuming a static data distribution. 
In this survey we aim to discuss a wide breadth of challenges faced in a continual learning setup and review existing work in the area. We discuss parameter regularization techniques to avoid catastrophic forgetting in neural networks followed by memory based approaches and the role of generative models in assisting continual learning algorithms. We discuss how dynamic neural networks assist continual learning by endowing neural networks with a new capacity to learn further. We conclude by discussing possible future directions.", "title": "" }, { "docid": "0784c4f87530aab020dbb8f15cba3127", "text": "As mechanical end-effectors, microgrippers enable the pick–transport–place of micrometer-sized objects, such as manipulation and positioning of biological cells in an aqueous environment. This paper reports on a monolithic MEMS-based microgripper with integrated force feedback along two axes and presents the first demonstration of forcecontrolled micro-grasping at the nanonewton force level. The system manipulates highly deformable biomaterials (porcine interstitial cells) in an aqueous environment using a microgripper that integrates a V-beam electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). The MEMS-based microgripper and the force control system experimentally demonstrate the capability of rapid contact detection and reliable force-controlled micrograsping to accommodate variations in size and mechanical properties of objects with a high reproducibility. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "94b00d09c303d92a44c08fb211c7a8ed", "text": "Pull-Request (PR) is the primary method for code contributions from thousands of developers in GitHub. To maintain the quality of software projects, PR review is an essential part of distributed software development. Assigning new PRs to appropriate reviewers will make the review process more effective which can reduce the time between the submission of a PR and the actual review of it. However, reviewer assignment is now organized manually in GitHub. To reduce this cost, we propose a reviewer recommender to predict highly relevant reviewers of incoming PRs. Combining information retrieval with social network analyzing, our approach takes full advantage of the textual semantic of PRs and the social relations of developers. We implement an online system to show how the reviewer recommender helps project managers to find potential reviewers from crowds. Our approach can reach a precision of 74% for top-1 recommendation, and a recall of 71% for top-10 recommendation.", "title": "" }, { "docid": "fff53c626db93d568b4e9e6c13ef6f86", "text": "We give a correspondence between enriched categories and the Gauss-Kleene-Floyd-Warshall connection familiar to computer scientists. This correspondence shows this generalization of categories to be a close cousin to the generalization of transitive closure algorithms. Via this connection we may bring categorical and 2-categorical constructions into an active but algebraically impoverished arena presently served only by semiring constructions. We illustrate these techniques by applying them to Birkoff’s poset arithmetic, interpretable as an algebra of “true concurrency.” The Floyd-Warshall algorithm for generalized transitive closure [AHU74] is the code fragment for v do for u, w do δuw + = δuv · δvw. 
Here δuv denotes an entry in a matrix δ, or equivalently a label on the edge from vertex u to vertex v in a graph. When the matrix entries are truth values 0 or 1, with + and · interpreted respectively as ∨ and ∧, we have Warshall’s algorithm for computing the transitive closure δ+ of δ, such that δ+ uv = 1 just when there exists a path in δ from u to v. When the entries are nonnegative reals, with + as min and · as addition, we have Floyd’s algorithm for computing all shortest paths in a graph: δ+ uv is the minimum, over all paths from u to v in δ, of the sum of the edges of each path. Other instances of this algorithm include Kleene’s algorithm for translating finite automata into regular expressions, and Gauss’s algorithm for inverting a matrix, in each case with an appropriate choice of semiring. Not only are these algorithms the same up to interpretation of the data, but so are their correctness proofs. This begs for a unifying framework, which is found in the notion of semiring. A semiring is a structure differing from a ring principally in that its additive component is not a group but merely a monoid, see AHU [AHU74] for a more formal treatment. Other matrix problems and algorithms besides Floyd-Warshall, such as matrix multiplication and the various recursive divide-and-conquer approaches to closure, also lend themselves to this abstraction. This abstraction supports mainly vertex-preserving operations on such graphs. Typical operations are, given two graphs δ, on a common set of vertices, to form their pointwise sum δ + defined as (δ + )uv = δuv + uv, their matrix product δ defined as (δ )uv = δu− · −v (inner product), along with their transitive, symmetric, and reflexive closures, all on the same vertex set. We would like to consider other operations that combine distinct vertex sets in various ways. The two basic operations we have in mind are the disjoint union and cartesian product of such graphs, along with such variations of these operations as pasting (as not-so-disjoint union), concatenation (as a disjoint union with additional edges from one component to the other), etc. An efficient way to obtain a usefully large library of such operations is to impose an appropriate categorical structure on the collection of such graphs. In this paper we show how to use enriched categories to provide such structure while at the same time extending the notion of semiring to the more general notion of monoidal category. In so doing we find two layers of categorical structure: 1 enriched categories in the lower layer, as a generalization of graphs, and ordinary categories in the upper layer having enriched categories for its objects. The graph operations we want to define are expressible as limits and colimits in the upper (ordinary) categories. We first make a connection between the two universes of graph theory and category theory. We assume at the outset that vertices of graphs correspond to objects of categories, both for ordinary categories and enriched categories. The interesting part is how the edges are treated. The underlying graph U(C) of a category C consists of the objects and morphisms of C, with no composition law or identities. But there may be more than one morphism between any two vertices, whereas in graph theory one ordinarily allows just one edge. These “multigraphs” of category theory would therefore appear to be a more general notion than the directed graphs of graph theory. A staple of graph theory however is the label, whether on a vertex or an edge. 
If we regard a homset as an edge labeled with a set then a multigraph is the case of an edge-labeled graph where the labels are sets. So a multigraph is intermediate in generality between a directed graph and an edge-labeled directed graph. So starting from graphs whose edges are labeled with sets, we may pass to categories by specifying identities and a composition law, or we may pass to edge-labeled graphs by allowing other labels than sets. What is less obvious is that we can elegantly and usefully do both at once, giving rise to enriched categories. The basic ideas behind enriched categories can be traced to Mac Lane [Mac65], with much of the detail worked out by Eilenberg and Kelly [EK65], with the many subsequent developments condensed by Kelly [Kel82]. Lawvere [Law73] provides a highly readable account of the concepts. We require of the edge labels only that they form a monoidal category. Roughly speaking this is a set bearing the structure of both a category and a monoid. Formally a monoidal category D = 〈D,⊗, I, α, λ, ρ〉 is a category D = 〈D0,m, i〉, a functor ⊗:D2 → D, an object I of D, and three natural isomorphisms α: c ⊗ (d ⊗ e) → (c ⊗ d) ⊗ e, λ: I ⊗ d → d, and ρ: d ⊗ I → d. (Here c⊗ (d⊗ e) and (c⊗ d)⊗ e denote the evident functors from D3 to D, and similarly for I ⊗ d, d⊗ I and d as functors from D to D, where c, d, e are variables ranging over D.) These correspond to the three basic identities of the equational theory of monoids. To complete the definition of monoidal category we require a certain coherence condition, namely that the other identities of that theory be “generated” in exactly one way from these, see Mac Lane [Mac71] for details. A D-category, or (small) category enriched in a monoidal category D, is a quadruple 〈V, δ,m, i〉 consisting of a set V (which we think of as vertices of a graph), a function δ:V 2 → D0 (the edgelabeling function), a family m of morphisms muvw: δ(u, v)⊗δ(v, w) → δ(u, w) of D (the composition law), and a family i of morphisms iu: I → δ(u, u) (the identities), satisfying the following diagrams. (δ(u, v)⊗ δ(v, w))⊗ δ(w, x) αδ(u,v)δ(v,w)δ(w,x) > δ(u, v)⊗ (δ(v, w)⊗ δ(w, x)) muvw ⊗ 1 ∨ 1⊗mvwx ∨ δ(u, w)⊗ δ(w, x) muwx > δ(u, x) < muvx δ(u, v)⊗ δ(v, x)", "title": "" }, { "docid": "7757fe9470f4def8fcec8021b3974519", "text": "Reaction prediction and retrosynthesis are the cornerstones of organic chemistry. Rule-based expert systems have been the most widespread approach to computationally solve these two related challenges to date. However, reaction rules often fail because they ignore the molecular context, which leads to reactivity conflicts. Herein, we report that deep neural networks can learn to resolve reactivity conflicts and to prioritize the most suitable transformation rules. We show that by training our model on 3.5 million reactions taken from the collective published knowledge of the entire discipline of chemistry, our model exhibits a top10-accuracy of 95 % in retrosynthesis and 97 % for reaction prediction on a validation set of almost 1 million reactions.", "title": "" }, { "docid": "3deced64cd17210f7e807e686c0221af", "text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). 
Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.", "title": "" }, { "docid": "1227c910d47e61be05def5e80e462688", "text": "Motivation\nThe identification of novel drug-target (DT) interactions is a substantial part of the drug discovery process. Most of the computational methods that have been proposed to predict DT interactions have focused on binary classification, where the goal is to determine whether a DT pair interacts or not. However, protein-ligand interactions assume a continuum of binding strength values, also called binding affinity and predicting this value still remains a challenge. The increase in the affinity data available in DT knowledge-bases allows the use of advanced learning techniques such as deep learning architectures in the prediction of binding affinities. In this study, we propose a deep-learning based model that uses only sequence information of both targets and drugs to predict DT interaction binding affinities. The few studies that focus on DT binding affinity prediction use either 3D structures of protein-ligand complexes or 2D features of compounds. One novel approach used in this work is the modeling of protein sequences and compound 1D representations with convolutional neural networks (CNNs).\n\n\nResults\nThe results show that the proposed deep learning based model that uses the 1D representations of targets and drugs is an effective approach for drug target binding affinity prediction. The model in which high-level representations of a drug and a target are constructed via CNNs achieved the best Concordance Index (CI) performance in one of our larger benchmark datasets, outperforming the KronRLS algorithm and SimBoost, a state-of-the-art method for DT binding affinity prediction.\n\n\nAvailability and implementation\nhttps://github.com/hkmztrk/DeepDTA.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "536d4a66e0e60b810e758dedf56ea5a9", "text": "Erasure coding is an established data protection mechanism. It provides high resiliency with low storage overhead, which makes it very attractive to storage systems developers. Unfortunately, when used in a distributed setting, erasure coding hampers a storage system's performance, because it requires clients to contact several, possibly remote sites to retrieve their data. This has hindered the adoption of erasure coding in practice, limiting its use to cold, archival data. Recent research showed that it is feasible to use erasure coding for hot data as well, thus opening new perspectives for improving erasure-coded storage systems. In this paper, we address the problem of minimizing access latency in erasure-coded storage. We propose Agar-a novel caching system tailored for erasure-coded content. Agar optimizes the contents of the cache based on live information regarding data popularity and access latency to different data storage sites. 
Our system adapts a dynamic programming algorithm to optimize the choice of data blocks that are cached, using an approach akin to \"Knapsack\" algorithms. We compare Agar to the classical Least Recently Used and Least Frequently Used cache eviction policies, while varying the amount of data cached between a data chunk and a whole replica of the object. We show that Agar can achieve 16% to 41% lower latency than systems that use classical caching policies.", "title": "" }, { "docid": "6921cd9c2174ca96ec0061ae2dd881eb", "text": "Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games' popularity and dedicated fan base are testaments to the allure of novel social interactions granted to people by allowing them an alternative life as a new character and persona. In this paper we investigate the phenomenon of \"gender swapping,\" which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped. We also discuss the effect of gender role and self-image in virtual social situations and the potential of our study for improving MMORPG quality and detecting online identity frauds.", "title": "" }, { "docid": "d7156d395b4bf8b3fc7b5a7472b30a66", "text": "Multimodal affective computing, learning to recognize and interpret human affect and subjective information from multiple data sources, is still challenging because:(i) it is hard to extract informative features to represent human affects from heterogeneous inputs; (ii) current fusion strategies only fuse different modalities at abstract levels, ignoring time-dependent interactions between modalities. Addressing such issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion to classify utterance-level sentiment and emotion from text and audio data. Our introduced model outperforms state-of-the-art approaches on published datasets, and we demonstrate that our model's synchronized attention over modalities offers visual interpretability.", "title": "" }, { "docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1", "text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in PHP applications. STRANGER uses symbolic forward and backward reachability analyses to compute the possible values that the string expressions can take during program execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that characterize all malicious inputs that can be used to generate attacks.", "title": "" }, { "docid": "3bd55f1a745aae146bb29e63b51fa85a", "text": "Employing mixed-method approach, this case study examined the in situ use of educational computer games in a summer math program to facilitate 4th and 5th graders' cognitive math achievement, metacognitive awareness, and positive attitudes toward math learning. 
The results indicated that students developed more positive attitudes toward math learning through five-week computer math gaming, but there was no significant effect of computer gaming on students’ cognitive test performance or metacognitive awareness development. The in-field observation and students’ think-aloud protocol informed that not every computer math drill game would engage children in committed learning. The study findings have highlighted the value of situating learning activities within the game story, making games pleasantly challenging, scaffolding reflections, and designing suitable off-computer activities. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5cf396e42e8708d768235f95bc8f227f", "text": "This thesis examines how artificial neural networks can benefit a large vocabulary, speaker independent, continuous speech recognition system. Currently, most speech recognition systems are based on hidden Markov models (HMMs), a statistical framework that supports both acoustic and temporal modeling. Despite their state-of-the-art performance, HMMs make a number of suboptimal modeling assumptions that limit their potential effectiveness. Neural networks avoid many of these assumptions, while they can also learn complex functions, generalize effectively, tolerate noise, and support parallelism. While neural networks can readily be applied to acoustic modeling, it is not yet clear how they can be used for temporal modeling. Therefore, we explore a class of systems called NN-HMM hybrids, in which neural networks perform acoustic modeling, and HMMs perform temporal modeling. We argue that a NN-HMM hybrid has several theoretical advantages over a pure HMM system, including better acoustic modeling accuracy, better context sensitivity, more natural discrimination, and a more economical use of parameters. These advantages are confirmed experimentally by a NN-HMM hybrid that we developed, based on context-independent phoneme models, that achieved 90.5% word accuracy on the Resource Management database, in contrast to only 86.0% accuracy achieved by a pure HMM under similar conditions. In the course of developing this system, we explored two different ways to use neural networks for acoustic modeling: prediction and classification. We found that predictive networks yield poor results because of a lack of discrimination, but classification networks gave excellent results. We verified that, in accordance with theory, the output activations of a classification network form highly accurate estimates of the posterior probabilities P(class|input), and we showed how these can easily be converted to likelihoods P(input|class) for standard HMM recognition algorithms. Finally, this thesis reports how we optimized the accuracy of our system with many natural techniques, such as expanding the input window size, normalizing the inputs, increasing the number of hidden units, converting the network’s output activations to log likelihoods, optimizing the learning rate schedule by automatic search, backpropagating error from word level outputs, and using gender dependent networks.", "title": "" }, { "docid": "e43cc845368e69ef1278e7109d4d8d6f", "text": "Estimating six degrees of freedom poses of a planar object from images is an important problem with numerous applications ranging from robotics to augmented reality. 
While the state-of-the-art Perspective-n-Point algorithms perform well in pose estimation, the success hinges on whether feature points can be extracted and matched correctly on target objects with rich texture. In this work, we propose a two-step robust direct method for six-dimensional pose estimation that performs accurately on both textured and textureless planar target objects. First, the pose of a planar target object with respect to a calibrated camera is approximately estimated by posing it as a template matching problem. Second, each object pose is refined and disambiguated using a dense alignment scheme. Extensive experiments on both synthetic and real datasets demonstrate that the proposed direct pose estimation algorithm performs favorably against state-of-the-art feature-based approaches in terms of robustness and accuracy under varying conditions. Furthermore, we show that the proposed dense alignment scheme can also be used for accurate pose tracking in video sequences.", "title": "" }, { "docid": "fd4bddf9a5ff3c3b8577c46249bec915", "text": "In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory in effect creating a neural network pushdown automata (NNPDA). This paper discusses in detail this NNPDA its construction, how it can be trained and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automata (PDA). Simulations show that in learning deterministic context-free grammars the balanced parenthesis language, 1^n0^n, and the deterministic Palindrome the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.", "title": "" }, { "docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd", "text": "In this paper, two main contributions are presented to manage the power flow between a wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed atmospheric conditions. The second one is to response to real-time control system constraints and to improve the generating system performance. For this, a hardware implementation of the proposed algorithm is performed using the Xilinx system generator. The experimental results show that the suggested system presents high accuracy and acceptable execution time performances. 
The proposed model and its control strategy offer a proper tool for optimizing the hybrid power system performance which we can use in smart house applications.", "title": "" }, { "docid": "9818399b4c119b58723c59e76bbfc1bd", "text": "Many vertex-centric graph algorithms can be expressed using asynchronous parallelism by relaxing certain read-after-write data dependences and allowing threads to compute vertex values using stale (i.e., not the most recent) values of their neighboring vertices. We observe that on distributed shared memory systems, by converting synchronous algorithms into their asynchronous counterparts, algorithms can be made tolerant to high inter-node communication latency. However, high inter-node communication latency can lead to excessive use of stale values causing an increase in the number of iterations required by the algorithms to converge. Although by using bounded staleness we can restrict the slowdown in the rate of convergence, this also restricts the ability to tolerate communication latency. In this paper we design a relaxed memory consistency model and consistency protocol that simultaneously tolerate communication latency and minimize the use of stale values. This is achieved via a coordinated use of best effort refresh policy and bounded staleness. We demonstrate that for a range of asynchronous graph algorithms and PDE solvers, on an average, our approach outperforms algorithms based upon: prior relaxed memory models that allow stale values by at least 2.27x; and Bulk Synchronous Parallel (BSP) model by 4.2x. We also show that our approach frequently outperforms GraphLab, a popular distributed graph processing framework.", "title": "" } ]
scidocsrr
24d3cd7173712d836ffeebb8d32e8c99
Product Barcode and Expiry Date Detection for the Visually Impaired Using a Smartphone
[ { "docid": "e8f33b4e500d8299aa803e72298d52ab", "text": "While there are many barcode readers available for identifying products in a supermarket or at home on mobile phones (e.g., Red Laser iPhone app), such readers are inaccessible to blind or visually impaired persons because of their reliance on visual feedback from the user to center the barcode in the camera's field of view. We describe a mobile phone application that guides a visually impaired user to the barcode on a package in real-time using the phone's built-in video camera. Once the barcode is located by the system, the user is prompted with audio signals to bring the camera closer to the barcode until it can be resolved by the camera, which is then decoded and the corresponding product information read aloud using text-to-speech. Experiments with a blind volunteer demonstrate proof of concept of our system, which allowed the volunteer to locate barcodes which were then translated to product information that was announced to the user. We successfully tested a series of common products, as well as user-generated barcodes labeling household items that may not come with barcodes.", "title": "" } ]
[ { "docid": "e33080761e4ece057f455148c7329d5e", "text": "This paper compares the utilization of ConceptNet and WordNet in query expansion. Spreading activation selects candidate terms for query expansion from these two resources. Three measures including discrimination ability, concept diversity, and retrieval performance are used for comparisons. The topics and document collections in the ad hoc track of TREC-6, TREC-7 and TREC-8 are adopted in the experiments. The results show that ConceptNet and WordNet are complementary. Queries expanded with WordNet have higher discrimination ability. In contrast, queries expanded with ConceptNet have higher concept diversity. The performance of queries expanded by selecting the candidate terms from ConceptNet and WordNet outperforms that of queries without expansion, and queries expanded with a single resource.", "title": "" }, { "docid": "40e06996a22e1de4220a09e65ac1a04d", "text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.", "title": "" }, { "docid": "405cd35764b8ae0b380e85a58a9714bf", "text": "This work is aimed at modeling, designing and developing an egg incubator system that is able to incubate various types of egg within the temperature range of 35 – 40 0 C. This system uses temperature and humidity sensors that can measure the condition of the incubator and automatically change to the suitable condition for the egg. Extreme variations in incubation temperature affect the embryo and ultimately, post hatch performance. In this work, electric bulbs were used to give the suitable temperature to the egg whereas water and controlling fan were used to ensure that humidity and ventilation were in good condition. LCD is used to display status condition of the incubator and an interface (Keypad) is provided to key in the appropriate temperature range for the egg. To ensure that all part of the eggs was heated by the lamp, DC motor was used to rotate iron rod at the bottom side and automatically change position of the egg. The entire element is controlled using AT89C52 Microcontroller. The temperature of the incubator is maintained at the normal temperature using PID controller implemented in microcontroller. Mathematical model of the incubator, actuator and PID controller were developed. Controller design based on the models was developed using Matlab Simulink. 
The models were validated through simulation and the Zeigler-Nichol tuning method was adopted as the tuning technique for varying the temperature control parameters of the PID controller in order to achieve a desirable transient response of the system when subjected to a unit step input. After several assumptions and simulations, a set of optimal parameters were obtained at the result of the third test that exhibited a commendable improvement in the overshoot, rise time, peak time and settling time thus improving the robustness and stability of the system. Keyword: Egg Incubator System, AT89C52 Microcontroller, PID Controller, Temperature Sensor.", "title": "" }, { "docid": "c89b94565b7071420017deae01295e23", "text": "Using cross-sectional data from three waves of the Youth Tobacco Policy Study, which examines the impact of the UK's Tobacco Advertising and Promotion Act (TAPA) on adolescent smoking behaviour, we examined normative pathways between tobacco marketing awareness and smoking intentions. The sample comprised 1121 adolescents in Wave 2 (pre-ban), 1123 in Wave 3 (mid-ban) and 1159 in Wave 4 (post-ban). Structural equation modelling was used to assess the direct effect of tobacco advertising and promotion on intentions at each wave, and also the indirect effect, mediated through normative influences. Pre-ban, higher levels of awareness of advertising and promotion were independently associated with higher levels of perceived sibling approval which, in turn, was positively related to intentions. Independent paths from perceived prevalence and benefits fully mediated the effects of advertising and promotion awareness on intentions mid- and post-ban. Advertising awareness indirectly affected intentions via the interaction between perceived prevalence and benefits pre-ban, whereas the indirect effect on intentions of advertising and promotion awareness was mediated by the interaction of perceived prevalence and benefits mid-ban. Our findings indicate that policy measures such as the TAPA can significantly reduce adolescents' smoking intentions by signifying smoking to be less normative and socially unacceptable.", "title": "" }, { "docid": "d805dc116db48b644b18e409dda3976e", "text": "Based on previous cross-sectional findings, we hypothesized that weight loss could improve several hemostatic factors associated with cardiovascular disease. In a randomized controlled trial, moderately overweight men and women were assigned to one of four weight loss treatment groups or to a control group. Measurements of plasminogen activator inhibitor-1 (PAI-1) antigen, tissue-type plasminogen activator (t-PA) antigen, D-dimer antigen, factor VII activity, fibrinogen, and protein C antigens were made at baseline and after 6 months in 90 men and 88 women. Net treatment weight loss was 9.4 kg in men and 7.4 kg in women. There was no net change (p > 0.05) in D-dimer, fibrinogen, or protein C with weight loss. Significant (p < 0.05) decreases were observed in the combined treatment groups compared with the control group for mean PAI-1 (31% decline), t-PA antigen (24% decline), and factor VII (11% decline). Decreases in these hemostatic variables were correlated with the amount of weight lost and the degree that plasma triglycerides declined; these correlations were stronger in men than women. 
These findings suggest that weight loss can improve abnormalities in hemostatic factors associated with obesity.", "title": "" }, { "docid": "67067043e630f3ef5d466c66a88b72ab", "text": "This paper reports an LC-based digitally controlled oscillator (DCO) using novel varactor pairs. Proposed DCO has high frequency resolution with low phase noise in 5.9 GHz. The DCO exploits the difference between the accumulation region capacitance and inversion region capacitance of two PMOS varactors. The novel varactor pairs make much smaller switchable capacitance than those of other approaches, and hence the DCO achieves the high frequency resolution and low phase noise. Also, identical sizes of PMOS varactor make them robust from process variation. The DCO implemented in 0.18 um CMOS process operates from 5.7 GHz to 6.3 GHz with 14 kHz frequency resolution which indicates the unit switchable capacitance of 3.5 aF. The designed DCO achieves a low phase-noise of −117 dBc/Hz at 1 MHz offset.", "title": "" }, { "docid": "c58d0f8105b1b8a439b90fd1d366a87c", "text": "Let F be a totally real field and χ an abelian totally odd character of F . In 1988, Gross stated a p-adic analogue of Stark’s conjecture that relates the value of the derivative of the p-adic L-function associated to χ and the p-adic logarithm of a p-unit in the extension of F cut out by χ. In this paper we prove Gross’s conjecture when F is a real quadratic field and χ is a narrow ring class character. The main result also applies to general totally real fields for which Leopoldt’s conjecture holds, assuming that either there are at least two primes above p in F , or that a certain condition relating the L invariants of χ and χ−1 holds. This condition on L -invariants is always satisfied when χ is quadratic.", "title": "" }, { "docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7", "text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.", "title": "" }, { "docid": "3668b5394b68a6dfc82951121ebdda8d", "text": "Now a day the usage of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. 
Various techniques like classification, clustering and apriori of web mining will be integrated to represent the sequence of operations in credit card transaction processing and show how it can be used for the detection of frauds. Initially, web mining techniques trained with the normal behaviour of a cardholder. If an incoming credit card transaction is not accepted by the web mining model with sufficiently high probability, it is considered to be fraudulent. At the same time, the system will try to ensure that genuine transactions will not be rejected. Using data from a credit card issuer, a web mining model based fraud detection system will be trained on a large sample of labelled credit card account transactions and tested on a holdout data set that consisted of all account activity. Web mining techniques can be trained on examples of fraud due to lost cards, stolen cards, application fraud, counterfeit fraud, and mail-order fraud. The proposed system will be able to detect frauds by considering a cardholder‟s spending habit without its significance. Usually, the details of items purchased in individual transactions are not known to any Fraud Detection System. The proposed system will be an ideal choice for addressing this problem of current fraud detection system. Another important advantage of proposed system will be a drastic reduction in the number of False Positives transactions. FDS module of proposed system will receive the card details and the value of purchase to verify, whether the transaction is genuine or not. If the Fraud Detection System module will confirm the transaction to be of fraud, it will raise an alarm, and the transaction will be declined.", "title": "" }, { "docid": "f9b7965888e180c6b07764dae8433a9d", "text": "Job recommender systems are designed to suggest a ranked list of jobs that could be associated with employee's interest. Most of existing systems use only one approach to make recommendation for all employees, while a specific method normally is good enough for a group of employees. Therefore, this study proposes an adaptive solution to make job recommendation for different groups of user. The proposed methods are based on employee clustering. Firstly, we group employees into different clusters. Then, we select a suitable method for each user cluster based on empirical evaluation. The proposed methods include CB-Plus, CF-jFilter and HyR-jFilter have applied for different three clusters. Empirical results show that our proposed methods is outperformed than traditional methods.", "title": "" }, { "docid": "08ab7142ae035c3594d3f3ae339d3e27", "text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.", "title": "" }, { "docid": "2abd75766d4875921edd4d6d63d5d617", "text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. 
Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.", "title": "" }, { "docid": "1e934aef7999b592971b393e40395994", "text": "Over recent years, as the popularity of mobile phone devices has increased, Short Message Service (SMS) has grown into a multi-billion dollars industry. At the same time, reduction in the cost of messaging services has resulted in growth in unsolicited commercial advertisements (spams) being sent to mobile phones. In parts of Asia, up to 30% of text messages were spam in 2012. Lack of real databases for SMS spams, short length of messages and limited features, and their informal language are the factors that may cause the established email filtering algorithms to underperform in their classification. In this project, a database of real SMS Spams from UCI Machine Learning repository is used, and after preprocessing and feature extraction, different machine learning techniques are applied to the database. Finally, the results are compared and the best algorithm for spam filtering for text messaging is introduced. Final simulation results using 10-fold cross validation shows the best classifier in this work reduces the overall error rate of best model in original paper citing this dataset by more than half.", "title": "" }, { "docid": "10e24047026cc4a062b08fc28468bbff", "text": "This comparative analysis of teacher-student interaction in two different instructional settings at the elementary-school level (18.3 hr in French immersion and 14.8 hr Japanese immersion) investigates the immediate effects of explicit correction, recasts, and prompts on learner uptake and repair. The results clearly show a predominant provision of recasts over prompts and explicit correction, regardless of instructional setting, but distinctively varied student uptake and repair patterns in relation to feedback type, with the largest proportion of repair resulting from prompts in French immersion and from recasts in Japanese immersion. 
Based on these findings and supported by an analysis of each instructional setting’s overall communicative orientation, we introduce the counterbalance hypothesis, which states that instructional activities and interactional feedback that act as a counterbalance to a classroom’s predominant communicative orientation are likely to prove more effective than instructional activities and interactional feedback that are congruent with its predominant communicative orientation.", "title": "" }, { "docid": "0e8cde83260d6ca4d8b3099628c25fc2", "text": "1Department of Molecular Virology, Immunology and Medical Genetics, The Ohio State University Medical Center, Columbus, Ohio, USA. 2Department of Physics, Pohang University of Science and Technology, Pohang, Korea. 3School of Interdisciplinary Bioscience and Bioengineering, Pohang, Korea. 4Physics Department, The Ohio State University, Columbus, Ohio, USA. 5These authors contributed equally to this work. e-mail: fishel.7@osu.edu", "title": "" }, { "docid": "7e5cd1252d95bb095e7fabd54211fc38", "text": "Interorganizational information systems, i.e., systems spanning more than a single organization, are proliferating as companies become aware of the potential of these systems to affect interorganizational interactions in terms of economic efficiency and strategic conduct. This new technology can have far-reaching impacts on the structure of entire industries. This article identifies two types of interorganizational information systems, information links and electronic markets. It then explores how economic models can be employed to study the implications of information links for the coordination of individual organizations with their customers and their suppliers, and the implications of electronic market systems for efficiency and competition in vertical markets. Finally, the strategic significance of interorganizational systems is addressed, and certain potential long-term impacts on the structure of markets, industries and organizations are discussed. This research was supported in part with funding from an Irvine Faculty Research Fellowship and from the National Science Foundation (Grant Number IRI-9015497). The author is grateful to the three anonymous referees for their valuable comments during the review process.", "title": "" }, { "docid": "c1fc1a31d9f5033a7469796d1222aef3", "text": "Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static camera. This information is usually provided by motor encoders, however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. 
Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy that is comparable to a standard static multi-camera configuration.", "title": "" }, { "docid": "924146534d348e7a44970b1d78c97e9c", "text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.", "title": "" }, { "docid": "bba99d325be71a13de31a1c70447e530", "text": "Search engine researchers typically depict search as the solitary activity of an individual searcher. In contrast, results from our critical-incident survey of 150 users on Amazon's Mechanical Turk service suggest that social interactions play an important role throughout the search process. Our main contribution is that we have integrated models from previous work in sensemaking and information seeking behavior to present a canonical social model of user activities before, during, and after search, suggesting where in the search process both explicitly and implicitly shared information may be valuable to individual searchers.", "title": "" }, { "docid": "33465b87cdc917904d16eb9d6cb8fece", "text": "An audio fingerprint is a compact content-based signature that summarizes an audio recording. Audio Fingerprinting technologies have attracted attention since they allow the identification of audio independently of its format and without the need of meta-data or watermark embedding. Other uses of fingerprinting include: integrity verification, watermark support and content-based audio retrieval. The different approaches to fingerprinting have been described with different rationales and terminology: Pattern matching, Multimedia (Music) Information Retrieval or Cryptography (Robust Hashing). In this paper, we review different techniques describing its functional blocks as parts of a common, unified framework.", "title": "" } ]
scidocsrr
cab295fa3f02872eb2dd23a2e34aaf22
Automatic playtesting for game parameter tuning via active learning
[ { "docid": "f672af55234d85a113e45fcb65a2149f", "text": "In recent years, the fields of Interactive Storytelling and Player Modelling have independently enjoyed increased interest in both academia and the computer games industry. The combination of these technologies, however, remains largely unexplored. In this paper, we present PaSSAGE (PlayerSpecific Stories via Automatically Generated Events), an interactive storytelling system that uses player modelling to automatically learn a model of the player’s preferred style of play, and then uses that model to dynamically select the content of an interactive story. Results from a user study evaluating the entertainment value of adaptive stories created by our system as well as two fixed, pre-authored stories indicate that automatically adapting a story based on learned player preferences can increase the enjoyment of playing a computer role-playing game for certain types of players.", "title": "" }, { "docid": "326493520ccb5c8db07362f412f57e62", "text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.", "title": "" } ]
[ { "docid": "e07756fb1ae9046c3b8c29b85a00bf0f", "text": "We present a clustering scheme that combines a mode-seeking phase with a cluster merging phase in the corresponding density map. While mode detection is done by a standard graph-based hill-climbing scheme, the novelty of our approach resides in its use of topological persistence to guide the merging of clusters. Our algorithm provides additional feedback in the form of a set of points in the plane, called a persistence diagram (PD), which provably reflects the prominences of the modes of the density. In practice, this feedback enables the user to choose relevant parameter values, so that under mild sampling conditions the algorithm will output the correct number of clusters, a notion that can be made formally sound within persistence theory. In addition, the output clusters have the property that their spatial locations are bound to the ones of the basins of attraction of the peaks of the density.\n The algorithm only requires rough estimates of the density at the data points, and knowledge of (approximate) pairwise distances between them. It is therefore applicable in any metric space. Meanwhile, its complexity remains practical: although the size of the input distance matrix may be up to quadratic in the number of data points, a careful implementation only uses a linear amount of memory and takes barely more time to run than to read through the input.", "title": "" }, { "docid": "0019353f6d685f459516bccaa9d1f187", "text": "Since the Global Positioning System (GPS) was launched, significant progress has been made in GPS receiver technology but the multipath error remains an unsolved problem. As solutions based on signal processing are not adequate, the most effective approach to discriminate between direct and multipath waves is to specify new and more restrictive criteria in the design of the receiving antenna. An innovative low profile, lightweight dual band (L1+L2) GPS radiator with a high multipath-rejection capability is presented. The proposed solution has been realized by two stacked shorted annular elliptical patch antennas. In what follows, a detailed account of the design process and antenna performances is given, presenting both simulated and experimental results.", "title": "" }, { "docid": "c105fdde48fdcbab369dc9698dc9fce9", "text": "Social link identification SIL, that is to identify accounts across different online social networks that belong to the same user, is an important task in social network applications. Most existing methods to solve this problem directly applied machine-learning classifiers on features extracted from user’s rich information. In practice, however, only some limited user information can be obtained because of privacy concerns. In addition, we observe the existing methods cannot handle huge amount of potential account pairs from different OSNs. In this paper, we propose an effective SIL method to address the above two challenges by expanding known anchor links (seed account pairs belonging to the same person). In particular, we leverage potentially useful information possessed by the existing anchor link, and then develop a local expansion model to identify new social links, which are taken as a generated anchor link to be used for iteratively identifying additional new social link. We evaluate our method on two most popular Chinese social networks. 
Experimental results show our proposed method achieves much better performance in terms of both the number of correct account pairs and efficiency.", "title": "" }, { "docid": "7908e315d84cf916fb4a61a083be7fe6", "text": "A base station antenna with dual-broadband and dual-polarization characteristics is presented in this letter. The proposed antenna contains four parts: a lower-band element, an upper-band element, arc-shaped baffle plates, and a box-shaped reflector. The lower-band element consists of two pairs of dipoles with additional branches for bandwidth enhancement. The upper-band element embraces two crossed hollow dipoles and is nested inside the lower-band element. Four arc-shaped baffle plates are symmetrically arranged on the reflector for isolating the lower- and upper-band elements and improving the radiation performance of upper-band element. As a result, the antenna can achieve a bandwidth of 50.6% for the lower band and 48.2% for the upper band when the return loss is larger than 15 dB, fully covering the frequency ranges 704–960 and 1710–2690 MHz for 2G/3G/4G applications. Measured port isolation larger than 27.5 dB in both the lower and upper bands is also obtained. At last, an array that consists of two lower-band elements and five upper-band elements is discussed for giving an insight into the future array design.", "title": "" }, { "docid": "ec1e79530ef20e2d8610475d07ee140d", "text": "a School of Social Sciences, Faculty of Health, Education and Social Sciences, University of the West of Scotland, High St., Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom b School of Computing, Faculty of Science and Technology, University of the West of Scotland, Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom c School of Psychological Sciences and Health, Faculty of Humanities and Social Science, University of Strathclyde, Glasgow, Scotland, United Kingdom", "title": "" }, { "docid": "8b4e1dde6a9c004ae6095d3ff5232595", "text": "The authors tested the effect of ambient scents in a shopping mall environment. Two competing models were used. The first model is derived from the environmental psychology research stream by Mehrabian and Russel (1974) and Donovan and Rossiter (1982) where atmospheric cues generate pleasure and arousal, and, in turn, an approach/avoidance behavior. The emotion–cognition model is supported by Zajonc and Markus (1984). The second model to be tested is based on Lazarus’ (1991) cognitive theory of emotions. In this latter model, shoppers’ perceptions of the retail environment and product quality mediate the effects of ambient scent cues on emotions and spending behaviors. Positive affect is enhanced from shoppers’ evaluations. Using structural equation modeling the authors conclude that the cognitive theory of emotions better explains the effect of ambient scent. Managerial implications are discussed. D 2003 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "4efa56d9c2c387608fe9ddfdafca0f9a", "text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. 
Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.", "title": "" }, { "docid": "4d6a7fc4bf89fb576142f6f4a0559db9", "text": "In this research, we propose a particular version of KNN (K Nearest Neighbor) where the similarity between feature vectors is computed considering the similarity among attributes or features as well as one among values. The task of text summarization is viewed into the binary classification task where each paragraph or sentence is classified into the essence or non-essence, and in previous works, improved results are obtained by the proposed version in the text classification and clustering. In this research, we define the similarity which considers both attributes and attribute values, modify the KNN into the version based on the similarity, and use the modified version as the approach to the text summarization task. As the benefits from this research, we may expect the more compact representation of data items and the better performance. Therefore, the goal of this research is to implement the text summarization algorithm which represents data items more compactly and provides the more reliability.", "title": "" }, { "docid": "a089f48b99c192f385c287ae98f297ae", "text": "Video object segmentation targets segmenting a specific object throughout a video sequence when given only an annotated first frame. Recent deep learning based approaches find it effective to fine-tune a general-purpose segmentation model on the annotated frame using hundreds of iterations of gradient descent. Despite the high accuracy that these methods achieve, the fine-tuning process is inefficient and fails to meet the requirements of real world applications. We propose a novel approach that uses a single forward pass to adapt the segmentation model to the appearance of a specific object. Specifically, a second meta neural network named modulator is trained to manipulate the intermediate layers of the segmentation network given limited visual and spatial information of the target object. The experiments show that our approach is 70× faster than fine-tuning approaches and achieves similar accuracy. Our model and code have been released at https://github.com/linjieyangsc/video_seg.", "title": "" }, { "docid": "7b6640e2d964ef3ee2597df9eed52073", "text": "Differential Fault Analysis (DFA), aided by sophisticated mathematical analysis techniques for ciphers and precise fault injection methodologies, has become a potent threat to cryptographic implementations. In this paper, we propose, to the best of the our knowledge, the first “DFA-aware” physical design automation methodology, that effectively mitigates the threat posed by DFA. We first develop a novel floorplan heuristic, which resists the simultaneous corruption of cipher states necessary for successful fault attack, by exploiting the fact that most fault injections are localized in practice. Our technique results in the computational complexity of the fault attack to shoot up to exhaustive search levels, making them practically infeasible. 
In the second part of the work, we develop a routing mechanism, which tackles more precise and costly fault injection techniques, like laser and electromagnetic guns. We propose a routing technique by integrating a specially designed ring oscillator based sensor circuit around the potential fault attack targets without incurring any performance overhead. We demonstrate the effectiveness of our technique by applying it on state of the art ciphers.", "title": "" }, { "docid": "00b80ec74135b3190a50b4e0d83af17a", "text": "Many organizations aspire to adopt agile processes to take advantage of the numerous benefits that they offer to an organization. Those benefits include, but are not limited to, quicker return on investment, better software quality, and higher customer satisfaction. To date, however, there is no structured process (at least that is published in the public domain) that guides organizations in adopting agile practices. To address this situation, we present the agile adoption framework and the innovative approach we have used to implement it. The framework consists of two components: an agile measurement index, and a four-stage process, that together guide and assist the agile adoption efforts of organizations. More specifically, the Sidky Agile Measurement Index (SAMI) encompasses five agile levels that are used to identify the agile potential of projects and organizations. The four-stage process, on the other hand, helps determine (a) whether or not organizations are ready for agile adoption, and (b) guided by their potential, what set of agile practices can and should be introduced. To help substantiate the “goodness” of the Agile Adoption Framework, we presented it to various members of the agile community, and elicited responses through questionnaires. The results of that substantiation effort are encouraging, and are also presented in this paper.", "title": "" }, { "docid": "371ab18488da4e719eda8838d0d42ba8", "text": "Research reveals dramatic differences in the ways that people from different cultures perceive the world around them. Individuals from Western cultures tend to focus on that which is object-based, categorically related, or self-relevant whereas people from Eastern cultures tend to focus more on contextual details, similarities, and group-relevant information. These different ways of perceiving the world suggest that culture operates as a lens that directs attention and filters the processing of the environment into memory. The present review describes the behavioral and neural studies exploring the contribution of culture to long-term memory and related processes. 
By reviewing the extant data on the role of various neural regions in memory and considering unifying frameworks such as a memory specificity approach, we identify some promising directions for future research.", "title": "" }, { "docid": "eda40814ecaecbe5d15ccba49f8a0d43", "text": "The problem of achieving conjunctive goals has been central to domain-independent planning research; the nonlinear constraint-posting approach has been most successful. Previous planners of this type have been complicated, heuristic, and ill-defined. I have combined and distilled the state of the art into a simple, precise, implemented algorithm (TWEAK) which I have proved correct and complete. I analyze previous work on domain-independent conjunctive planning; in retrospect it becomes clear that all conjunctive planners, linear and nonlinear, work the same way. The efficiency and correctness of these planners depends on the traditional add/delete-list representation for actions, which drastically limits their usefulness. I present theorems that suggest that efficient general purpose planning with more expressive action representations is impossible, and suggest ways to avoid this problem", "title": "" }, { "docid": "d5017531ec03b489b565f3c517d4756e", "text": "Layouts are important for graphic design and scene generation. We propose a novel generative adversarial network, named as LayoutGAN, that synthesizes graphic layouts by modeling semantic and geometric relations of 2D elements. The generator of LayoutGAN takes as input a set of randomly placed 2D graphic elements and uses self-attention modules to refine their semantic and geometric parameters jointly to produce a meaningful layout. Accurate alignment is critical for good layouts. We thus propose a novel differentiable wireframe rendering layer that maps the generated layout to a wireframe image, upon which a CNN-based discriminator is used to optimize the layouts in visual domain. We validate the effectiveness of LayoutGAN in various experiments including MNIST digit generation, document layout generation, clipart abstract scene generation and tangram graphic design.", "title": "" }, { "docid": "74373dd009fc6285b8f43516d8e8bf2c", "text": "Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. 
This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria.", "title": "" }, { "docid": "ed82ac5cf6cf4173fde52a25c17b86aa", "text": "The biological process and molecular functions involved in the cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge the systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we firstly studied several typical mathematical modeling approaches of biological systems in different scales and deeply analyzed their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling were summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.", "title": "" }, { "docid": "452eee7c8f199ce8ce6d89c14b08ac8f", "text": "Interactional aerodynamics of multi-rotor flows has been studied for a quadcopter representing a generic quad tilt-rotor aircraft in hover. The objective of the present study is to investigate the effects of the separation distances between rotors, and also fuselage and wings on the performance and efficiency of multirotor systems. Three-dimensional unsteady Navier-Stokes equations are solved using a spatially 5th-order accurate scheme, dual-time stepping, and the Detached Eddy Simulation turbulence model. The results show that the separation distances as well as the wings have significant effects on the vertical forces of quadrotor systems in hover. Understanding interactions in multi-rotor flows would help improve the design of next generation multi-rotor drones.", "title": "" }, { "docid": "86627f7ca48eda4985b979c9b137ba2a", "text": "In this paper we present the TWitterBuonaScuola corpus (TW-BS), a novel Italian linguistic resource for Sentiment Analysis, developed with the main aim of analyzing the online debate on the controversial Italian political reform “Buona Scuola” (Good school), aimed at reorganizing the national educational and training systems. We describe the methodologies applied in the collection and annotation of data. The collection has been driven by the detection of the hashtags mainly used by the participants to the debate, while the annotation has been focused on sentiment polarity and irony, but also extended to mark the aspects of the reform that were mainly discussed in the debate. An in-depth study of the disagreement among annotators is included. 
We describe the collection and annotation stages, and the in-depth analysis of disagreement made with Crowdflower, a crowdsourcing annotation platform.", "title": "" }, { "docid": "57d5db0feaa35543e15f2417cd4f2db5", "text": "Images are static and lack important depth information about the underlying 3D scenes. We introduce interactive images in the context of man-made environments wherein objects are simple and regular, share various non-local relations (e.g., coplanarity, parallelism, etc.), and are often repeated. Our interactive framework creates partial scene reconstructions based on cuboid-proxies with minimal user interaction. It subsequently allows a range of intuitive image edits mimicking real-world behavior, which are otherwise difficult to achieve. Effectively, the user simply provides high-level semantic hints, while our system ensures plausible operations by conforming to the extracted non-local relations. We demonstrate our system on a range of real-world images and validate the plausibility of the results using a user study.", "title": "" }, { "docid": "c0484f3055d7e7db8dfea9d4483e1e06", "text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.", "title": "" } ]
scidocsrr
db3fa632649ce3300d1397b4b7f5efdc
An Analysis on Time- and Session-aware Diversification in Recommender Systems
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "8c07982729ca439c8e346cbe018a7198", "text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.", "title": "" }, { "docid": "841a5ecba126006e1deb962473662788", "text": "In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. 
To gain additional insights of its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.", "title": "" }, { "docid": "539a25209bf65c8b26cebccf3e083cd0", "text": "We study the problem of web search result diversification in the case where intent based relevance scores are available. A diversified search result will hopefully satisfy the information need of users who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, to measure relevance and diversity altogether. We argue that this is a better metric than some previously proposed intent aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results based on optimizing an objective function corresponding to this metric and evaluate it on shopping related queries.", "title": "" } ]
[ { "docid": "f69723ed73c7edd9856883bbb086ed0c", "text": "An algorithm for license plate recognition (LPR) applied to the intelligent transportation system is proposed on the basis of a novel shadow removal technique and character recognition algorithms. This paper has two major contributions. One contribution is a new binary method, i.e., the shadow removal method, which is based on the improved Bernsen algorithm combined with the Gaussian filter. Our second contribution is a character recognition algorithm known as support vector machine (SVM) integration. In SVM integration, character features are extracted from the elastic mesh, and the entire address character string is taken as the object of study, as opposed to a single character. This paper also presents improved techniques for image tilt correction and image gray enhancement. Our algorithm is robust to the variance of illumination, view angle, position, size, and color of the license plates when working in a complex environment. The algorithm was tested with 9026 images, such as natural-scene vehicle images using different backgrounds and ambient illumination particularly for low-resolution images. The license plates were properly located and segmented as 97.16% and 98.34%, respectively. The optical character recognition system is the SVM integration with different character features, whose performance for numerals, Kana, and address recognition reached 99.5%, 98.6%, and 97.8%, respectively. Combining the preceding tests, the overall performance of success for the license plate achieves 93.54% when the system is used for LPR in various complex conditions.", "title": "" }, { "docid": "a6a98545230e6dd5c87948f5b000a076", "text": "The Traveling Salesman Problem (TSP) is one of the standard test problems used in performance analysis of discrete optimization algorithms. The Ant Colony Optimization (ACO) algorithm appears among heuristic algorithms used for solving discrete optimization problems. In this study, a new hybrid method is proposed to optimize parameters that affect performance of the ACO algorithm using Particle Swarm Optimization (PSO). In addition, 3-Opt heuristic method is added to proposed method in order to improve local solutions. The PSO algorithm is used for detecting optimum values of parameters ̨ and ˇ which are used for city selection operations in the ACO algorithm and determines significance of inter-city pheromone and distances. The 3-Opt algorithm is used for the purpose of improving city selection operations, which could not be improved due to falling in local minimums by the ACO algorithm. The performance of proposed hybrid method is investigated on ten different benchmark problems taken from literature and it is compared to the performance of some well-known algorithms. Experimental results show that the performance of proposed method by using fewer ants than the number of cities for the TSPs is better than the performance of compared methods in most cases in terms of solution quality and robustness. © 2015 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "75fb9b4adf41c0a93f72084cc3a7444a", "text": "OBJECTIVE\nIn this study, we tested an expanded model of Kanter's structural empowerment, which specified the relationships among structural and psychological empowerment, job strain, and work satisfaction.\n\n\nBACKGROUND\nStrategies proposed in Kanter's empowerment theory have the potential to reduce job strain and improve employee work satisfaction and performance in current restructured healthcare settings. The addition to the model of psychological empowerment as an outcome of structural empowerment provides an understanding of the intervening mechanisms between structural work conditions and important organizational outcomes.\n\n\nMETHODS\nA predictive, nonexperimental design was used to test the model in a random sample of 404 Canadian staff nurses. The Conditions of Work Effectiveness Questionnaire, the Psychological Empowerment Questionnaire, the Job Content Questionnaire, and the Global Satisfaction Scale were used to measure the major study variables.\n\n\nRESULTS\nStructural equation modelling analyses revealed a good fit of the hypothesized model to the data based on various fit indices (χ2 = 1140, df = 545, χ2/df ratio = 2.09, CFI = 0.986, RMSEA = 0.050). The amount of variance accounted for in the model was 58%. Staff nurses felt that structural empowerment in their workplace resulted in higher levels of psychological empowerment. These heightened feelings of psychological empowerment in turn strongly influenced job strain and work satisfaction. However, job strain did not have a direct effect on work satisfaction.\n\n\nCONCLUSIONS\nThese results provide initial support for an expanded model of organizational empowerment and offer a broader understanding of the empowerment process.", "title": "" }, { "docid": "f3e5941be4543d5900d56c1a7d93d0ea", "text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.", "title": "" }, { "docid": "425c96a3ed2d88bbc9324101626c992d", "text": "Nonlocal image representation or group sparsity has attracted considerable interest in various low-level vision tasks and has led to several state-of-the-art image denoising techniques, such as BM3D, learned simultaneous sparse coding. In the past, convex optimization with sparsity-promoting convex regularization was usually regarded as a standard scheme for estimating sparse signals in noise. However, using convex regularization cannot still obtain the correct sparsity solution under some practical problems including image inverse problems. In this letter, we propose a nonconvex weighted $\\ell _p$ minimization based group sparse representation framework for image denoising. 
To make the proposed scheme tractable and robust, the generalized soft-thresholding algorithm is adopted to solve the nonconvex $\\ell _p$ minimization problem. In addition, to improve the accuracy of the nonlocal similar patch selection, an adaptive patch search scheme is proposed. Experimental results demonstrate that the proposed approach not only outperforms many state-of-the-art denoising methods such as BM3D and weighted nuclear norm minimization, but also results in a competitive speed.", "title": "" }, { "docid": "4dfb5d8dfb09f510427aa6400b1f330f", "text": "In this paper, a permanent magnet synchronous motor for ship propulsion is designed. The appropriate number of poles and slots are selected and the cogging torque is minimized in order to reduce noise and vibrations. To perform high efficiency and reliability, the inverter system consists of multiple modules and the stator coil has multi phases and groups. Because of the modular structure, the motor can be operated with some damaged inverters. In order to maintain high efficiency at low speed operation, same phase coils of different group are connected in series and excited by the half number of inverters than at high speed operation. A MW-class motor is designed and the performances with the proposed inverter control method are calculated.", "title": "" }, { "docid": "be447131554900aaba025be449944613", "text": "Attackers increasingly take advantage of innocent users who tend to casually open email messages assumed to be benign, carrying malicious documents. Recent targeted attacks aimed at organizations utilize the new Microsoft Word documents (*.docx). Anti-virus software fails to detect new unknown malicious files, including malicious docx files. In this paper, we present ALDOCX, a framework aimed at accurate detection of new unknown malicious docx files that also efficiently enhances the framework’s detection capabilities over time. Detection relies upon our new structural feature extraction methodology (SFEM), which is performed statically using meta-features extracted from docx files. Using machine-learning algorithms with SFEM, we created a detection model that successfully detects new unknown malicious docx files. In addition, because it is crucial to maintain the detection model’s updatability and incorporate new malicious files created daily, ALDOCX integrates our active-learning (AL) methods, which are designed to efficiently assist anti-virus vendors by better focusing their experts’ analytical efforts and enhance detection capability. ALDOCX identifies and acquires new docx files that are most likely malicious, as well as informative benign files. These files are used for enhancing the knowledge stores of both the detection model and the anti-virus software. The evaluation results show that by using ALDOCX and SFEM, we achieved a high detection rate of malicious docx files (94.44% TPR) compared with the anti-virus software (85.9% TPR)—with very low FPR rates (0.19%). ALDOCX’s AL methods used only 14% of the labeled docx files, which led to a reduction of 95.5% in security experts’ labeling efforts compared with the passive learning and the support vector machine (SVM)-Margin (existing active-learning method). 
Our AL methods also showed a significant improvement of 91% in number of unknown docx malware acquired, compared with the passive learning and the SVM-Margin, thus providing an improved updating solution for the detection model, as well as the anti-virus software widely used within organizations.", "title": "" }, { "docid": "19b8acf4e5c68842a02e3250c346d09b", "text": "A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X-bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr=2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X-band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR≤2) of the prototype array reaches 9.5 % and 25 % for the Sand X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤—21 dB for the S-band and ≤—20 dB for the X-band.", "title": "" }, { "docid": "ada7b43edc18b321c57a978d7a3859ae", "text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.", "title": "" }, { "docid": "ffdee20af63d50f39f9cc5077a14dc87", "text": "Recent advancement in remote sensing facilitates collection of hyperspectral images (HSIs) in hundreds of bands which provides a potential platform to detect and identify the unique trends in land and atmospheric datasets with high accuracy. But along with the detailed information, HSIs also pose several processing problems such as1) increase in computational complexity due to high dimensionality. So dimension reduction without losing information is one of the major concerns in this area and 2) limited availability of labeled training sets causes the ill posed problem which is needed to be addressed by the classification algorithms. Initially classification techniques of HSIs were based on spectral information only. Gradually researchers started utilizing both spectral and spatial information to increase classification accuracy. Also the classification algorithms have evolved from supervised to semi supervised mode. This paper presents a survey about the techniques available in the field of HSI processing to provide a seminal view of how the field of HSI analysis has evolved over the last few decades and also provides a snapshot of the state of the art techniques used in this area. 
General Terms Classification algorithms, image processing, supervised, semi supervised techniques.", "title": "" }, { "docid": "38a1ed4d7147a48758c1a03c5c136457", "text": "The Penrose inequality gives a lower bound for the total mass of a spacetime in terms of the area of suitable surfaces that represent black holes. Its validity is supported by the cosmic censorship conjecture and therefore its proof (or disproof) is an important problem in relation with gravitational collapse. The Penrose inequality is a very challenging problem in mathematical relativity and it has received continuous attention since its formulation by Penrose in the early seventies. Important breakthroughs have been made in the last decade or so, with the complete resolution of the so-called Riemannian Penrose inequality and a very interesting proposal to address the general case by Bray and Khuri. In this paper, the most important results on this field will be discussed and the main ideas behind their proofs will be summarized, with the aim of presenting what is the status of our present knowledge in this topic.", "title": "" }, { "docid": "ebea79abc60a5d55d0397d21f54cc85e", "text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.", "title": "" }, { "docid": "3d1fa2e999a2cc54b3c1ec98d121e9fb", "text": "Model-based design is a powerful design technique for cyber-physical systems, but too often literature assumes knowledge of a methodology without reference to an explicit design process, instead focusing on isolated steps such as simulation, software synthesis, or verification. We combine these steps into an explicit and holistic methodology for model-based design of cyber-physical systems from abstraction to architecture, and from concept to realization. We decompose model-based design into ten fundamental steps, describe and evaluate an iterative design methodology, and evaluate this methodology in the development of a cyber-physical system.", "title": "" }, { "docid": "46ea713c4206d57144350a7871433392", "text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. 
Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.", "title": "" }, { "docid": "253b2696bb52f43528f02e85d1070e96", "text": "Prosocial behavior consists of behaviors regarded as beneficial to others, including helping, sharing, comforting, guiding, rescuing, and defending others. Although women and men are similar in engaging in extensive prosocial behavior, they are different in their emphasis on particular classes of these behaviors. The specialty of women is prosocial behaviors that are more communal and relational, and that of men is behaviors that are more agentic and collectively oriented as well as strength intensive. These sex differences, which appear in research in various settings, match widely shared gender role beliefs. The origins of these beliefs lie in the division of labor, which reflects a biosocial interaction between male and female physical attributes and the social structure. The effects of gender roles on behavior are mediated by hormonal processes, social expectations, and individual dispositions.", "title": "" }, { "docid": "abed12088956b9b695a0d5a158dc1f71", "text": "Neural encoding of pitch in the auditory brainstem is known to be shaped by long-term experience with language or music, implying that early sensory processing is subject to experience-dependent neural plasticity. In language, pitch patterns consist of sequences of continuous, curvilinear contours; in music, pitch patterns consist of relatively discrete, stair-stepped sequences of notes. The primary aim was to determine the influence of domain-specific experience (language vs. music) on the encoding of pitch in the brainstem. Frequency-following responses were recorded from the brainstem in native Chinese, English amateur musicians, and English nonmusicians in response to iterated rippled noise homologues of a musical pitch interval (major third; M3) and a lexical tone (Mandarin tone 2; T2) from the music and language domains, respectively. Pitch-tracking accuracy (whole contour) and pitch strength (50 msec sections) were computed from the brainstem responses using autocorrelation algorithms. Pitch-tracking accuracy was higher in the Chinese and musicians than in the nonmusicians across domains. Pitch strength was more robust across sections in musicians than in nonmusicians regardless of domain. In contrast, the Chinese showed larger pitch strength, relative to nonmusicians, only in those sections of T2 with rapid changes in pitch. Interestingly, musicians exhibited greater pitch strength than the Chinese in one section of M3, corresponding to the onset of the second musical note, and two sections within T2, corresponding to a note along the diatonic musical scale. We infer that experience-dependent plasticity of brainstem responses is shaped by the relative saliency of acoustic dimensions underlying the pitch patterns associated with a particular domain.", "title": "" }, { "docid": "7d0fb12fce0ef052684a8664a3f5c543", "text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). 
In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.", "title": "" }, { "docid": "3d0b507f18dca7e2710eab5fdaa9a20b", "text": "This paper is designed to illustrate and consider the relations between three types of metarepresentational ability used in verbal comprehension: the ability to metarepresent attributed thoughts, the ability to metarepresent attributed utterances, and the ability to metarepresent abstract, non-attributed representations (e.g. sentence types, utterance types, propositions). Aspects of these abilities have been separ at ly considered in the literatures on “theory of mind”, Gricean pragmatics and quotation. The aim of this paper is to show how the results of these separate strands of research might be integrated with an empirically plausible pragmatic theory.", "title": "" }, { "docid": "6f845762227f11525173d6d0869f6499", "text": "We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement the Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.", "title": "" }, { "docid": "f37d9a57fd9100323c70876cf7a1d7ad", "text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. 
The computer simulation results show the effectiveness of the proposed dual-network memory model. © 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
2a20d0506afe23b957eba9c9255c9d6b
SVM Based Decision Support System for Heart Disease Classification with Integer-Coded Genetic Algorithm to Select Critical Features
[ { "docid": "c688d24fd8362a16a19f830260386775", "text": "We present a fast iterative algorithm for identifying the Support Vectors of a given set of points. Our algorithm works by maintaining a candidate Support Vector set. It uses a greedy approach to pick points for inclusion in the candidate set. When the addition of a point to the candidate set is blocked because of other points already present in the set we use a backtracking approach to prune away such points. To speed up convergence we initialize our algorithm with the nearest pair of points from opposite classes. We then use an optimization based approach to increment or prune the candidate Support Vector set. The algorithm makes repeated passes over the data to satisfy the KKT constraints. The memory requirements of our algorithm scale as O(|S|) in the average case, where |S| is the size of the Support Vector set. We show that the algorithm is extremely competitive as compared to other conventional iterative algorithms like SMO and the NPA. We present results on a variety of real life datasets to validate our claims.", "title": "" }, { "docid": "1a1268ef30c225740b35ac123650ceb0", "text": "Support Vector Machines, one of the new techniques for pattern classification, have been widely used in many application areas. The kernel parameters setting for SVM in a training process impacts on the classification accuracy. Feature selection is another factor that impacts classification accuracy. The objective of this research is to simultaneously optimize the parameters and feature subset without degrading the SVM classification accuracy. We present a genetic algorithm approach for feature selection and parameters optimization to solve this kind of problem. We tried several real-world datasets using the proposed GA-based approach and the Grid algorithm, a traditional method of performing parameters searching. Compared with the Grid algorithm, our proposed GA-based approach significantly improves the classification accuracy and has fewer input features for support vector machines. © 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "95dbebf3ed125e2a4f0d901f42f09be3", "text": "Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ~ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.", "title": "" }, { "docid": "8621fff78e92e1e0e9ba898d5e2433ca", "text": "This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.", "title": "" }, { "docid": "f03e2e50acb9650099c15cdd88f525d9", "text": "Social network research has begun to take advantage of finegrained communications regarding coordination, decisionmaking, and knowledge sharing. These studies, however, have not generally analyzed how external events are associated with a social network’s structure and communicative properties. Here, we study how external events are associated with a network’s change in structure and communications. Analyzing a complete dataset of millions of instant messages among the decision-makers in a large hedge fund and their network of outside contacts, we investigate the link between price shocks, network structure, and change in the affect and cognition of decision-makers embedded in the network. When price shocks occur the communication network tends not to display structural changes associated with adaptiveness. Rather, the network “turtles up”. It displays a propensity for higher clustering, strong tie interaction, and an intensification of insider vs. outsider communication. Further, we find changes in network structure predict shifts in cognitive and affective processes, execution of new transactions, and local optimality of transactions better than prices, revealing the important predictive relationship between network structure and collective behavior within a social network.", "title": "" }, { "docid": "7a2d4032d79659a70ed2f8a6b75c4e71", "text": "In recent years, transition-based parsers have shown promise in terms of efficiency and accuracy. 
Though these parsers have been extensively explored for multiple Indian languages, there is still considerable scope for improvement by properly incorporating syntactically relevant information. In this article, we enhance transition-based parsing of Hindi and Urdu by redefining the features and feature extraction procedures that have been previously proposed in the parsing literature of Indian languages. We propose and empirically show that properly incorporating syntactically relevant information like case marking, complex predication and grammatical agreement in an arc-eager parsing model can significantly improve parsing accuracy. Our experiments show an absolute improvement of ∼2% LAS for parsing of both Hindi and Urdu over a competitive baseline which uses rich features like part-of-speech (POS) tags, chunk tags, cluster ids and lemmas. We also propose some heuristics to identify ezafe constructions in Urdu texts which show promising results in parsing these constructions.", "title": "" }, { "docid": "6c88c8723d54262ae5839302bd3ded5a", "text": "This paper surveys the reduced common-mode voltage pulsewidth modulation (RCMV-PWM) methods for three-phase voltage-source inverters, investigates their performance characteristics, and provides a comparison with the standard PWM methods. PWM methods are reviewed, and their pulse patterns and common-mode voltage (CMV) patterns are illustrated. The inverter input and output current ripple characteristics and output voltage linearity characteristics of each PWM method are thoroughly investigated by analytical methods, simulations, and experiments. The research results illustrate the advantages and disadvantages of the considered methods, and suggest the utilization of the near-state PWM and active zero state PWM1 methods as overall superior methods. The paper aids in the selection and application of appropriate PWM methods in inverter drives with low CMV requirements.", "title": "" }, { "docid": "a55dd930b34c0d7fce69d8e7f108dfa7", "text": "EduSummIT 2013 featured a working group that examined digital citizenship within a global context. Group members recognized that, given today’s international, regional, political, and social dynamics, the notion of “global” might be more aspirational than practical. The development of informed policies and practices serving and involving as many sectors of society as possible is desirable since a growing world’s population, including students in classrooms, will have continued access to the Internet, mobile devices and social media. Action steps to guide technology integration into educational settings must address the following factors: national and local policies, bandwidth and technology infrastructure, educational contexts, cyber-safety and cyberwellness practices and privacy accountability. Finally, in the process of developing and implementing positive and productive solutions, as many key members and stakeholders as possible who share in—and benefit from—students’ digital lives should be involved, from families and educators to law enforcement authorities, from telecommunication organizations to local, provincial and national leaders.", "title": "" }, { "docid": "0347347608738b966ca4a62dfb37fdd7", "text": "Much of the work done in the field of tangible interaction has focused on creating tools for learning; however, in many cases, little evidence has been provided that tangible interfaces offer educational benefits compared to more conventional interaction techniques. 
In this paper, we present a study comparing the use of a tangible and a graphical interface as part of an interactive computer programming and robotics exhibit that we designed for the Boston Museum of Science. In this study, we have collected observations of 260 museum visitors and conducted interviews with 13 family groups. Our results show that visitors found the tangible and the graphical systems equally easy to understand. However, with the tangible interface, visitors were significantly more likely to try the exhibit and significantly more likely to actively participate in groups. In turn, we show that regardless of the condition, involving multiple active participants leads to significantly longer interaction times. Finally, we examine the role of children and adults in each condition and present evidence that children are more actively involved in the tangible condition, an effect that seems to be especially strong for girls.", "title": "" }, { "docid": "6fe413cf75a694217c30a9ef79fab589", "text": "Zusammenfassung) Biometrics have been used for secure identification and authentication for more than two decades since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be listed under the topic of “Biometric Cryptosystems”. Although the security of some of these systems is questionable when they are utilized alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focused in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric based encryption, signature and remote authentication. We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al.. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. Thus, we introduce a new notion called “Weak-identity Privacy” and propose a new construction by combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain but without requiring a decryption key for the authentication decision of the server. 
Thus, leakage of the secret key of any system component does not affect the security of the scheme as opposed to the current biometric systems involving crypto-", "title": "" }, { "docid": "ccafd3340850c5c1a4dfbedd411f1d62", "text": "The paper predicts changes in global and regional incidences of armed conflict for the 2010–2050 period. The predictions are based on a dynamic multinomial logit model estimation on a 1970–2009 cross-sectional dataset of changes between no armed conflict, minor conflict, and major conflict. Core exogenous predictors are population size, infant mortality rates, demographic composition, education levels, oil dependence, ethnic cleavages, and neighborhood characteristics. Predictions are obtained through simulating the behavior of the conflict variable implied by the estimates from this model. We use projections for the 2011–2050 period for the predictors from the UN World Population Prospects and the International Institute for Applied Systems Analysis. We treat conflicts, recent conflict history, and neighboring conflicts as endogenous variables. Out-of-sample validation of predictions for 2007–2009 (based on estimates for the 1970–2000 period) indicates that the model predicts well, with an AUC of 0.937. Using a p > 0.30 threshold for positive prediction, the True Positive Rate 7–9 years into the future is 0.79 and the False Positive Rate 0.085. We predict a continued decline in the proportion of the world’s countries that have internal armed conflict, from about 15% in 2009 to 7% in 2050. The decline is particularly strong in the Western Asia and North Africa region, and less clear in Africa South of Sahara. The remaining conflict countries will increasingly be concentrated in East, Central, and Southern Africa and in East and South Asia. ∗An earlier version of this paper was presented to the ISA Annual Convention 2009, New York, 15–18 Feb. The research was funded by the Norwegian Research Council grant no. 163115/V10. Thanks to Ken Benoit, Mike Colaresi, Scott Gates, Nils Petter Gleditsch, Joe Hewitt, Bjørn Høyland, Andy Mack, Näıma Mouhleb, Gerald Schneider, and Phil Schrodt for valuable comments.", "title": "" }, { "docid": "158cdd1c7740f30ec87e10a19171721b", "text": "The current practice of physical diagnosis is dependent on physician skills and biases, inductive reasoning, and time efficiency. Although the clinical utility of echocardiography is well known, few data exist on how to integrate 2-dimensional screening \"quick-look\" ultrasound applications into a novel, modernized cardiac physical examination. We discuss the evidence basis behind ultrasound \"signs\" pertinent to the cardiovascular system and elemental in synthesis of bedside diagnoses and propose the application of a brief cardiac limited ultrasound examination based on these signs. An ultrasound-augmented cardiac physical examination can be taught in traditional medical education and has the potential to improve bedside diagnosis and patient care.", "title": "" }, { "docid": "7aca3e7f9409fa1381a309d304eb898d", "text": "The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). 
The main contributions of a TBSR are (a) the source nodes send data and notification to sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security. (b) Furthermore, the data and notification adopt a dynamic probability of marking and logging approach during the routing. Therefore, when attacked, the network will adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined based on the level of battery remaining; when nodes harvest more energy, the probability of marking is higher, which can improve network security. Because if the probability of marking is higher, the number of marked nodes on the data packet routing path will be more, and the sink will be more likely to trace back the data packet routing path and find malicious nodes according to this notification. When data packets are routed again, they tend to bypass these malicious nodes, which make the success rate of routing higher and lead to improved network security. When the battery level is low, the probability of marking will be decreased, which is able to save energy. For logging, when the battery level is high, the network adopts a larger probability of marking and smaller probability of logging to transmit notification to the sink, which can reserve enough storage space to meet the storage demand for the period of the battery on low level; when the battery level is low, increasing the probability of logging can reduce energy consumption. After the level of battery remaining is high enough, nodes then send the notification which was logged before to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme has been improved comprehensively; it can effectively increase the quantity of notification received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%.", "title": "" }, { "docid": "8d8db8a8cf9dee121cb93e92577a03ea", "text": "Nowadays, non-photorealistic rendering is an area in computer graphics that tries to simulate what artists do and the tools they use. Stippling illustrations with felt-tipped colour pen is not a commonly used technique by artists due to its complexity. In this paper we present a new method to simulate stippling illustrations with felt-tipped colour pen from a photograph or an image. This method infers a probability function with an expert system from some rules given by the artist and then simulates the behaviour of the artist when placing the dots on the illustration by means of a stochastic algorithm.", "title": "" }, { "docid": "5474d000acf6c20708ed73b5a7e38a0b", "text": "The primary objective of the research is to estimate the dependence between hair mercury content, hair selenium, mercury-to-selenium ratio, serum lipid spectrum, and gamma-glutamyl transferase (GGT) activity in 63 adults (40 men and 23 women). Serum triglyceride (TG) concentration in the high-mercury group significantly exceeded the values obtained for low- and medium-mercury groups by 72 and 42 %, respectively. Serum GGT activity in the examinees from high-Hg group significantly exceeded the values of the first and the second groups by 75 and 28 %, respectively. Statistical analysis of the male sample revealed similar dependences. 
Surprisingly, no significant changes in the parameters analyzed were detected in the female sample. In all analyzed samples, hair mercury was not associated with hair selenium concentrations. Significant correlation between hair mercury content and serum TG concentration (r = 0.531) and GGT activity (r = 0.524) in the general sample of the examinees was detected. The respective correlations were observed in the male sample. Hair mercury-to-selenium ratios significantly correlated with body weight (r = 0.310), body mass index (r = 0.250), serum TG (r = 0.389), atherogenic index (r = 0.257), and GGT activity (r = 0.393). The same correlations were observed in the male sample. Hg/Se ratio in women did not correlate with the analyzed parameters. Generally, the results of the current study show the following: (1) hair mercury is associated with serum TG concentration and GGT activity in men, (2) hair selenium content is not related to hair mercury concentration, and (3) mercury-to-selenium ratio correlates with lipid spectrum parameters and GGT activity.", "title": "" }, { "docid": "ce3d81c74ef3918222ad7d2e2408bdb0", "text": "This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination. Research in this area uses and extends ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.\nA key insight of the framework presented here is that coordination can be seen as the process of managing dependencies among activities. Further progress, therefore, should be possible by characterizing different kinds of dependencies and identifying the coordination processes that can be used to manage them. A variety of processes are analyzed from this perspective, and commonalities across disciplines are identified. Processes analyzed include those for managing shared resources, producer/consumer relationships, simultaneity constraints, and task/subtask dependencies.\nSection 3 summarizes ways of applying a coordination perspective in three different domains:(1) understanding the effects of information technology on human organizations and markets, (2) designing cooperative work tools, and (3) designing distributed and parallel computer systems. In the final section, elements of a research agenda in this new area are briefly outlined.", "title": "" }, { "docid": "368a37e8247d8a6f446b31f1dc0f635e", "text": "In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.", "title": "" }, { "docid": "9a90164fb1f41bb36966487f86988f77", "text": "Coordination is important in software development because it leads to benefi ts such as cost savings, shorter development cycles, and better-integrated products. Team cognition research suggests that members coordinate through team knowledge, but this perspective has only been investigated in real-time collocated tasks and we know little about which types of team knowledge best help coordination in the most geographically distributed software work. 
In this fi eld study, we investigate the coordination needs of software teams, how team knowledge affects coordination, and how this effect is infl uenced by geographic dispersion. Our fi ndings show that software teams have three distinct types of coordination needs—technical, temporal, and process—and that these needs vary with the members’ role; geographic distance has a negative effect on coordination, but is mitigated by shared knowledge of the team and presence awareness; and shared task knowledge is more important for coordination among collocated members. We articulate propositions for future research in this area based on our analysis.", "title": "" }, { "docid": "409d104fa3e992ac72c65b004beaa963", "text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.", "title": "" }, { "docid": "0560c6e9f4de466cc5fcef9b1eba11ce", "text": "Current methods for estimating force from tactile sensor signals are either inaccurate analytic models or taskspecific learned models. In this paper, we explore learning a robust model that maps tactile sensor signals to force. We specifically explore learning a mapping for the SynTouch BioTac sensor via neural networks. We propose a voxelized input feature layer for spatial signals and leverage information about the sensor surface to regularize the loss function. To learn a robust tactile force model that transfers across tasks, we generate ground truth data from three different sources: (1) the BioTac rigidly mounted to a force torque (FT) sensor, (2) a robot interacting with a ball rigidly attached to the same FT sensor, and (3) through force inference on a planar pushing task by formalizing the mechanics as a system of particles and optimizing over the object motion. A total of 140k samples were collected from the three sources. We achieve a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. Additionally, we evaluate the learned force model in a force feedback grasp controller performing object lifting and gentle placement. Our results can be found on https://sites.google.com/view/tactile-force. I. MOTIVATION & RELATED WORK Tactile perception is an important modality, enabling robots to gain critical information for safe interaction in the physical world [1–3]. The advent of sophisticated tactile sensors [4] with high fidelity signals allows for inferring varied information such as object identity and pose, surface texture, and slip between the object and robot [5–13]. However, using these sensors for force feedback control has been limited 1 NVIDIA, USA. 2 University of Utah Robotics Center and the School of Computing, University of Utah, Salt Lake City, UT, USA. bala@cs.utah.edu 3 Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, GA, USA. 4 University of Washington, Paul G. 
Allen School for Comupter Science & Engineering, Seattle, WA, USA to simple incremental controllers conditioned on detection of salient events (e.g., slip or contact) [10, 14] or learning taskspecific feedback policies on the tactile signals [15–17]. One limiting factor has been the inaccuracy of functions to map the tactile signals to force robustly across different tasks. Current methods for force estimation on the SynTouch BioTac [18] fail to cover the entire range of forces applied during typical manipulation tasks. Analytic methods [19, 20] tend to produce very noisy estimates at small force values and their accuracy decreases as the imparted force angle relative to the sensor surface normal becomes large (i.e., a large shear component relative to the compression force). On the other hand, learned force models [21, 22] tend to overfit to the dataset used in training and have not been sufficiently validated in predicting force across varied tasks. More specifically, Wettel and Loeb [21] use machine learning techniques to estimate the force, contact location, and object curvature when a tactile sensor interacts with an object. Lin et al. [19] improve upon [21], formulating analytic functions for estimation of the contact point, force, and torque from the BioTac sensor readings. Navarro et al. [20] explore calibration of the force magnitude estimates by recording the DC pressure signal when the sensor is in contact with a force plate. They use these values in a linear least squares formulation to estimate the gain. While they can estimate the magnitude of force, they cannot estimate force direction. Su et al. [22] explore using feed-forward neural networks to learn a model that maps BioTac signals to force estimates. The neural network more accurately estimates forces than the linear model from [19] and is used to perform grasp stabilization. Importantly, none of these methods validate their force estimates using a data source different from the method used to generate the training data. They also lack experimental comparison between different approaches in the context of robotic manipulation tasks. In this paper, we attempt to address these shortcomings, by collecting a large scale ground truth dataset from different methods and by leveraging the sensor surface and spatial information in our proposed neural network architecture. For one of our collection methods, we infer force from the motion of an object on a planar surface, by formalizing the interaction as a system of particles, a deviation from the well-established velocity model for planar pushing [23] which does not reason about force magnitude. This scheme of force estimation allows us to obtain accurate small-scale forces (0.1-2N), enabling us to learn a precise force prediction model. Motivated by [24], we compare our proposed method with the current state-of-the-art methods for force estimation for the BioTac sensor. We specifically compare the analytic model from [19] and the best performing feed-forward neural network model from [22]. We compare both in terms of force estimation accuracy on our dataset and also empirical experiments on a robot manipulation task. To summarize, this paper makes the following contributions: 1) We provide a novel method to infer force from object motion on a planar surface by formalizing the mechanics as a system of particles and solving for the force in a least squares minimization problem, given the object motion and the point on the object where the force is imparted. 
2) We introduce a novel 3D voxel grid, neural network encoding of tactile signals enabling the network to better leverage spatial relations in the signal. We further tailor our learning to the tactile sensor through the introduction of a novel loss function used in training that scales the loss as a function of the angular distance between the imparted force and the surface normal. 3) We collected a large-scale dataset for the BioTac sensor, consisting of over 600 pushing episodes and 200 interactions between an arm-hand system equipped with the BioTac sensors and a force torque sensor. We validate these contributions on our dataset and in an autonomous pick and place task. We show that our proposed method robustly learns a model to estimate forces from the BioTac tactile signals that generalize across multiple robot tasks. Our method improves upon the state of the art [19, 22] in tactile force estimation for the BioTac sensor achieving a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. II. PROBLEM DEFINITION & PROPOSED APPROACH We describe the sensor’s states in the following section, followed by a formal definition of the problem. We then describe the computation of ground truth force from planar pushing in Sec. II-C and our network architecture in Sec. II-D.", "title": "" }, { "docid": "5da2747dd2c3fe5263d8bfba6e23de1f", "text": "We propose to transfer the content of a text written in a certain style to an alternative text written in a different style, while maintaining as much as possible of the original meaning. Our work is inspired by recent progress of applying style transfer to images, as well as attempts to replicate the results to text. Our model is a deep neural network based on Generative Adversarial Networks (GAN). Our novelty is replacing the discrete next-word prediction with prediction in the embedding space, which provides two benefits (1) train the GAN without using gradient approximations and (2) provide semantically related results even for failure cases.", "title": "" }, { "docid": "88d2fd675e5d0a53ff0834505a438164", "text": "BACKGROUND\nMany healthcare organizations have implemented adverse event reporting systems in the hope of learning from experience to prevent adverse events and medical errors. However, a number of these applications have failed or not been implemented as predicted.\n\n\nOBJECTIVE\nThis study presents an extended technology acceptance model that integrates variables connoting trust and management support into the model to investigate what determines acceptance of adverse event reporting systems by healthcare professionals.\n\n\nMETHOD\nThe proposed model was empirically tested using data collected from a survey in the hospital environment. A confirmatory factor analysis was performed to examine the reliability and validity of the measurement model, and a structural equation modeling technique was used to evaluate the causal model.\n\n\nRESULTS\nThe results indicated that perceived usefulness, perceived ease of use, subjective norm, and trust had a significant effect on a professional's intention to use an adverse event reporting system. Among them, subjective norm had the most contribution (total effect). Perceived ease of use and subjective norm also had a direct effect on perceived usefulness and trust, respectively. 
Management support had a direct effect on perceived usefulness, perceived ease of use, and subjective norm.\n\n\nCONCLUSION\nThe proposed model provides a means to understand what factors determine the behavioral intention of healthcare professionals to use an adverse event reporting system and how this may affect future use. In addition, understanding the factors contributing to behavioral intent may potentially be used in advance of system development to predict reporting systems acceptance.", "title": "" } ]
scidocsrr
f3965f9c66c57f297199d82c30c1cf3c
Data analysis of Li-Ion and lead acid batteries discharge parameters with Simulink-MATLAB
[ { "docid": "5208762a8142de095c21824b0a395b52", "text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.", "title": "" } ]
[ { "docid": "c355dc8d0ec6b673cea3f2ab39d13701", "text": "Errors in estimating and forecasting often result from the failure to collect and consider enough relevant information. We examine whether attributes associated with persistence in information acquisition can predict performance in an estimation task. We focus on actively open-minded thinking (AOT), need for cognition, grit, and the tendency to maximize or satisfice when making decisions. In three studies, participants made estimates and predictions of uncertain quantities, with varying levels of control over the amount of information they could collect before estimating. Only AOT predicted performance. This relationship was mediated by information acquisition: AOT predicted the tendency to collect information, and information acquisition predicted performance. To the extent that available information is predictive of future outcomes, actively open-minded thinkers are more likely than others to make accurate forecasts.", "title": "" }, { "docid": "d0c8e58e06037d065944fc59b0bd7a74", "text": "We propose a new discrete choice model that generalizes the random utility model (RUM). We show that this model, called the Generalized Stochastic Preference (GSP) model can explain several choice phenomena that can’t be represented by a RUM. In particular, the model can easily (and also exactly) replicate some well known examples that are not RUM, as well as controlled choice experiments carried out since 1980’s that possess strong regularity violations. One of such regularity violation is the decoy effect in which the probability of choosing a product increases when a similar, but inferior product is added to the choice set. An appealing feature of the GSP is that it is non-parametric and therefore it has very high flexibility. The model has also a simple description and interpretation: it builds upon the well known representation of RUM as a stochastic preference, by allowing some additional consumer types to be non-rational.", "title": "" }, { "docid": "3a31192482674f400e6230f35c7bfe38", "text": "This paper introduces Parsing to Programs, a framework that combines ideas from parsing and probabilistic programming for situated question answering. As a case study, we build a system that solves pre-university level Newtonian physics questions. Our approach represents domain knowledge of Newtonian physics as programs. When presented with a novel question, the system learns a formal representation of the question by combining interpretations from the question text and any associated diagram. Finally, the system uses this formal representation to solve the questions using the domain knowledge. We collect a new dataset of Newtonian physics questions from a number of textbooks and use it to train our system. The system achieves near human performance on held-out textbook questions and section 1 of AP Physics C mechanics - both on practice questions as well as on freely available actual exams held in 1998 and 2012.", "title": "" }, { "docid": "b912b32d9f1f4e7a5067450b98870a71", "text": "As of May 2013, 56 percent of American adults had a smartphone, and most of them used it to access the Internet. One-third of smartphone users report that their phone is the primary way they go online. Just as the Internet changed retailing in the late 1990s, many argue that the transition to mobile, sometimes referred to as “Web 3.0,” will have a similarly disruptive effect (Brynjolfsson et al. 2013). 
In this paper, we aim to document some early effects of how mobile devices might change Internet and retail commerce. We present three main findings based on an analysis of eBay’s mobile shopping application and core Internet platform. First, and not surprisingly, the early adopters of mobile e-commerce applications appear", "title": "" }, { "docid": "e42192f9d4d33f92939a04361e1bb706", "text": "Today bone fractures are very common in our country because of road accidents or through other injuries. The X-Ray images are the most common accessibility of peoples during the accidents. But the minute fracture detection in X-Ray image is not possible due to low resolution and quality of the original X-Ray image. The complexity of bone structure and the difference in visual characteristics of fracture by their location. So it is difficult to accurately detect and locate the fractures also determine the severity of the injury. The automatic detection of fractures in X-Ray images is a significant contribution for assisting the physicians in making faster and more accurate patient diagnostic decisions and treatment planning. In this paper, an automatic hierarchical algorithm for detecting bone fracture in X-Ray image is proposed. It uses the Gray level cooccurrence matrix for detecting the fracture. The results are promising, demonstrating that the proposed method is capable of automatically detecting both major and minor fractures accurately, and shows potential for clinical application. Statistical results also indicate the superiority of the proposed methods compared to other techniques. This paper examines the development of such a system, for the detection of long-bone fractures. This project fully employed MATLAB 7.8.0 (.r2009a) as the programming tool for loading image, image processing and user interface development. Results obtained demonstrate the performance of the pelvic bone fracture detection system with some limitations.", "title": "" }, { "docid": "84d4d99ad90c4d05b827f4dde7f07d52", "text": "Diffusions of new products and technologies through social networks can be formalized as spreading of infectious diseases. However, while epidemiological models describe infection in terms of transmissibility, we propose a diffusion model that explicitly includes consumer decision-making affected by social influences and word-of-mouth processes. In our agent-based model consumers’ probability of adoption depends on the external marketing effort and on the internal influence that each consumer perceives in his/her personal networks. Maintaining a given marketing effort and assuming its effect on the probability of adoption as linear, we can study how social processes affect diffusion dynamics and how the speed of the diffusion depends on the network structure and on consumer heterogeneity. First, we show that the speed of diffusion changes with the degree of randomness in the network. In markets with high social influence and in which consumers have a sufficiently large local network, the speed is low in regular networks, it increases in small-world networks and, contrarily to what epidemic models suggest, it becomes very low again in random networks. Second, we show that heterogeneity helps the diffusion. Ceteris paribus and varying the degree of heterogeneity in the population of agents simulation results show that the S. A. Delre ( ) . W. Jager Faculty of Management and Organization, Department of Marketing, University of Groningen, P.O. 
Box 800, 9700 AV Groningen, The Netherlands e-mail: s.a.delre@rug.nl W. Jager e-mail: w.jager@rug.nl M. A. Janssen School of Human Evolution and Social Change & Department of Computer Science and Engineering, Arizona State University, Box 872402, Tempe, AZ 85287-2402 e-mail: Marco.Janssen@asu.edu", "title": "" }, { "docid": "903b68096d2559f0e50c38387260b9c8", "text": "Vitamin C in humans must be ingested for survival. Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.", "title": "" }, { "docid": "34e2eafd055e097e167afe7cb244f99b", "text": "This paper describes the functional verification effort during a specific hardware development program that included three of the largest ASICs designed at Nortel. These devices marked a transition point in methodology as verification took front and centre on the critical path of the ASIC schedule. Both the simulation and emulation strategies are presented. The simulation methodology introduced new techniques such as ASIC sub-system level behavioural modeling, large multi-chip simulations, and random pattern simulations. The emulation strategy was based on a plan that consisted of integrating parts of the real software on the emulated system. This paper describes how these technologies were deployed, analyzes the bugs that were found and highlights the bottlenecks in functional verification as systems become more complex.", "title": "" }, { "docid": "19ff822c54e6aee920a4a63243d07839", "text": "Noma is an opportunistic infection promoted by extreme poverty. It evolves rapidly from a gingival inflammation to grotesque orofacial gangrene. It occurs worldwide, but is most common in sub-Saharan Africa. The peak incidence of acute noma is at ages 1-4 years, coinciding with the period of linear growth retardation in deprived children. Noma is a scourge in communities with poor environmental sanitation. It results from complex interactions between malnutrition, infections, and compromised immunity. 
Diseases that commonly precede noma include measles, malaria, severe diarrhoea, and necrotising ulcerative gingivitis. The acute stage responds readily to antibiotic treatment. The sequelae after healing include variable functional and aesthetic impairments, which require reconstructive surgery. Noma can be prevented through promotion of national awareness of the disease, poverty reduction, improved nutrition, promotion of exclusive breastfeeding in the first 3-6 months of life, optimum prenatal care, and timely immunisations against the common childhood diseases.", "title": "" }, { "docid": "86fd3a2dd99b85f6de59dca495375565", "text": "To help elderly and physically disabled people to become self-reliant in daily life such as at home or a health clinic, we have developed a network-type brain machine interface (BMI) system called “network BMI” to control real-world actuators like wheelchairs based on human intention measured by a portable brain measurement system. In this paper, we introduce the technologies for achieving the network BMI system to support activities of daily living. key words: brain machine interface, smart house, data analysis, network agent", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "a78782e389313600620bfb68fc57a81f", "text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. 
Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.", "title": "" }, { "docid": "6f99c3fe7d99aa7f00a3e3eb8856db97", "text": "The 3-D modeling technique presented in this paper, predicts, with high accuracy, electromagnetic fields and corresponding dynamic effects in conducting regions for rotating machines with slotless windings, e.g., self-supporting windings. The presented modeling approach can be applied to a wide variety of slotless winding configurations, including skewing and/or different winding shapes. It is capable to account for induced eddy currents in the conductive rotor parts, e.g., permanent-magnet (PM) eddy-current losses, albeit not iron, and winding ac losses. The specific focus of this paper is to provide the reader with the complete implementation and assumptions details of such a 3-D semianalytical approach, which allows model validations with relatively short calculation times. This model can be used to improve future design optimizations for machines with 3-D slotless windings. It has been applied, in this paper, to calculate fixed parameter Faulhaber, rhombic, and diamond slotless PM machines to illustrate accuracy and applicability.", "title": "" }, { "docid": "1cbf4840e09a950a5adfcbbfbd476d6a", "text": "We introduce an online neural sequence to sequence model that learns to alternate between encoding and decoding segments of the input as it is read. By independently tracking the encoding and decoding representations our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders that have to read and memorize the entire input sequence in their fixedlength hidden states before producing any output. It is different from previous attentive models in that, instead of treating the attention weights as output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.", "title": "" }, { "docid": "d2401987609efcb5a7fe420d48dfec1b", "text": "Good sparse approximations are essential for practical inference in Gaussian Processes as the computational cost of exact methods is prohibitive for large datasets. The Fully Independent Training Conditional (FITC) and the Variational Free Energy (VFE) approximations are two recent popular methods. Despite superficial similarities, these approximations have surprisingly different theoretical properties and behave differently in practice. We thoroughly investigate the two methods for regression both analytically and through illustrative examples, and draw conclusions to guide practical application.", "title": "" }, { "docid": "31b449b209beaadbbcc36c485517c3cf", "text": "While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. 
To facilitate the creation of novel visualizations we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, but at the same time providing appropriate abstractions which allow developers to create prototypes quickly which can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically-linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system to add behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework we look briefly at three different projects all of which required novel visualizations in different domains, and all of which worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.", "title": "" }, { "docid": "b37064e74a2c88507eacb9062996a911", "text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.", "title": "" }, { "docid": "e3823047ccc723783cf05f24ca60d449", "text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.", "title": "" }, { "docid": "5ccb3ab32054741928b8b93eea7a9ce2", "text": "A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, identification of time constraints and more. The goal of this paper is to address an important issue in workflows modelling and specification, which is data flow, its modelling, specification and validation. 
Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause the process to execute on inconsistent data, or even lead to process suspension. A discussion on essential requirements of the workflow data model in order to support data validation is also given.", "title": "" } ]
scidocsrr
90cd8c386fa424bedca4491052232790
A simple probabilistic deep generative model for learning generalizable disentangled representations from grouped data
[ { "docid": "98d3dddfca32c442f6b7c0a6da57e690", "text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.", "title": "" }, { "docid": "ebee9e3ab7fe1a0eb5da28793874e309", "text": "We introduce a conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes. One code summarizes the specified factors of variation associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class. Examples of such observations include images of a set of labeled objects captured at different viewpoints, or recordings of set of speakers dictating multiple phrases. In both instances, the intra-class diversity is the source of the unspecified factors of variation: each object is observed at multiple viewpoints, and each speaker dictates multiple phrases. Learning to disentangle the specified factors from the unspecified ones becomes easier when strong supervision is possible. Suppose that during training, we have access to pairs of images, where each pair shows two different objects captured from the same viewpoint. This source of alignment allows us to solve our task using existing methods. However, labels for the unspecified factors are usually unavailable in realistic scenarios where data acquisition is not strictly controlled. We address the problem of disentaglement in this more general setting by combining deep convolutional autoencoders with a form of adversarial training. Both factors of variation are implicitly captured in the organization of the learned embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable of generalizing to unseen classes and intra-class variabilities.", "title": "" }, { "docid": "43f9e6edee92ddd0b9dfff885b69f64d", "text": "In this paper, we present a scalable and exact solution for probabilistic linear discriminant analysis (PLDA). 
PLDA is a probabilistic model that has been shown to provide state-of-the-art performance for both face and speaker recognition. However, it has one major drawback: At training time estimating the latent variables requires the inversion and storage of a matrix whose size grows quadratically with the number of samples for the identity (class). To date, two approaches have been taken to deal with this problem, to 1) use an exact solution that calculates this large matrix and is obviously not scalable with the number of samples or 2) derive a variational approximation to the problem. We present a scalable derivation which is theoretically equivalent to the previous nonscalable solution and thus obviates the need for a variational approximation. Experimentally, we demonstrate the efficacy of our approach in two ways. First, on labeled faces in the wild, we illustrate the equivalence of our scalable implementation with previously published work. Second, on the large Multi-PIE database, we illustrate the gain in performance when using more training samples per identity (class), which is made possible by the proposed scalable formulation of PLDA.", "title": "" } ]
[ { "docid": "13d9b338b83a5fcf75f74607bf7428a7", "text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.", "title": "" }, { "docid": "4fb391446ca62dc2aa52ce905d92b036", "text": "The frequency and intensity of natural disasters has increased significantly in recent decades, and this trend is expected to continue. Hence, understanding and predicting human evacuation behavior and mobility will play a vital role in planning effective humanitarian relief, disaster management, and long-term societal reconstruction. However, existing models are shallow models, and it is difficult to apply them for understanding the “deep knowledge” of human mobility. Therefore, in this study, we collect big and heterogeneous data (e.g., GPS records of 1.6 million users over 3 years, data on earthquakes that have occurred in Japan over 4 years, news report data, and transportation network data), and we build an intelligent system, namely, DeepMob, for understanding and predicting human evacuation behavior and mobility following different types of natural disasters. The key component of DeepMob is based on a deep learning architecture that aims to understand the basic laws that govern human behavior and mobility following natural disasters, from big and heterogeneous data. Furthermore, based on the deep learning model, DeepMob can accurately predict or simulate a person’s future evacuation behaviors or evacuation routes under different disaster conditions. Experimental results and validations demonstrate the efficiency and superior performance of our system, and suggest that human mobility following disasters may be predicted and simulated more easily than previously thought.", "title": "" }, { "docid": "60cbe9d8e1cbc5dd87c8f438cc766a0b", "text": "Drosophila mounts a potent host defence when challenged by various microorganisms. Analysis of this defence by molecular genetics has now provided a global picture of the mechanisms by which this insect senses infection, discriminates between various classes of microorganisms and induces the production of effector molecules, among which antimicrobial peptides are prominent. An unexpected result of these studies was the discovery that most of the genes involved in the Drosophila host defence are homologous or very similar to genes implicated in mammalian innate immune defences. 
Recent progress in research on Drosophila immune defence provides evidence for similarities and differences between Drosophila immune responses and mammalian innate immunity.", "title": "" }, { "docid": "d98fce90097705f466382e8bcb0a39b1", "text": "This paper presents a novel vehicular adaptive cruise control (ACC) system that can comprehensively address issues of tracking capability, fuel economy and driver desired response. A hierarchical control architecture is utilized in which a lower controller compensates for nonlinear vehicle dynamics and enables tracking of desired acceleration. The upper controller is synthesized under the framework of model predictive control (MPC) theory. A quadratic cost function is developed that considers the contradictions between minimal tracking error, low fuel consumption and accordance with driver dynamic car-following characteristics while driver longitudinal ride comfort, driver permissible tracking range and rear-end safety are formulated as linear constraints. Employing a constraint softening method to avoid computing infeasibility, an optimal control law is numerically calculated using a quadratic programming algorithm. Detailed simulations with a heavy duty truck show that the developed ACC system provides significant benefits in terms of fuel economy and tracking capability while at the same time also satisfying driver desired car following characteristics.", "title": "" }, { "docid": "07e9b961a1196665538d89b60a30a7d1", "text": "The problem of anomaly detection in time series has received a lot of attention in the past two decades. However, existing techniques cannot locate where the anomalies are within anomalous time series, or they require users to provide the length of potential anomalies. To address these limitations, we propose a self-learning online anomaly detection algorithm that automatically identifies anomalous time series, as well as the exact locations where the anomalies occur in the detected time series. In addition, for multivariate time series, it is difficult to detect anomalies due to the following challenges. First, anomalies may occur in only a subset of dimensions (variables). Second, the locations and lengths of anomalous subsequences may be different in different dimensions. Third, some anomalies may look normal in each individual dimension but different with combinations of dimensions. To mitigate these problems, we introduce a multivariate anomaly detection algorithm which detects anomalies and identifies the dimensions and locations of the anomalous subsequences. We evaluate our approaches on several real-world datasets, including two CPU manufacturing data from Intel. We demonstrate that our approach can successfully detect the correct anomalies without requiring any prior knowledge about the data.", "title": "" }, { "docid": "bec66d4d576f2c5c5643ffe4b72ab353", "text": "Many cities suffer from noise pollution, which compromises people's working efficiency and even mental health. New York City (NYC) has opened a platform, entitled 311, to allow people to complain about the city's issues by using a mobile app or making a phone call; noise is the third largest category of complaints in the 311 data. As each complaint about noises is associated with a location, a time stamp, and a fine-grained noise category, such as \"Loud Music\" or \"Construction\", the data is actually a result of \"human as a sensor\" and \"crowd sensing\", containing rich human intelligence that can help diagnose urban noises. 
In this paper we infer the fine-grained noise situation (consisting of a noise pollution indicator and the composition of noises) of different times of day for each region of NYC, by using the 311 complaint data together with social media, road network data, and Points of Interest (POIs). We model the noise situation of NYC with a three-dimensional tensor, where the three dimensions stand for regions, noise categories, and time slots, respectively. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we recover the noise situation throughout NYC. The information can inform people and officials' decision making. We evaluate our method with four real datasets, verifying the advantages of our method beyond four baselines, such as the interpolation-based approach.", "title": "" }, { "docid": "5d40cae84395cc94d68bd4352383d66b", "text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we present a survey of the SHVC extension. We also describe its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy with the simulcast configuration and with HEVC for the enhancement layer using HM16 for several test sequences and coding conditions.", "title": "" }, { "docid": "466b1889684abb52f2d83d45fbabc4bb", "text": "In this study, we focused on developing a novel 3D Thinning algorithm to extract a one-voxel-wide skeleton from various 3D objects aiming at preserving the topological information. The 3D Thinning algorithm was tested on computer-generated and real 3D reconstructed image sets acquired from TEMT and compared with other existing 3D Thinning algorithms. It is found that the algorithm conserves medial axes and topologies very well, demonstrating many advantages over the existing technologies. They are versatile, rigorous, efficient and rotation invariant.", "title": "" }, { "docid": "b753eb752d4f87dbff82d77e8417f389", "text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. 
On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:", "title": "" }, { "docid": "2cbd47c2e7a1f68bd84d18413db26ea3", "text": "Horizontal gene transfer (HGT) refers to the acquisition of foreign genes by organisms. The occurrence of HGT among bacteria in the environment is assumed to have implications in the risk assessment of genetically modified bacteria which are released into the environment. First, introduced genetic sequences from a genetically modified bacterium could be transferred to indigenous micro-organisms and alter their genome and subsequently their ecological niche. Second, the genetically modified bacterium released into the environment might capture mobile genetic elements (MGE) from indigenous micro-organisms which could extend its ecological potential. Thus, for a risk assessment it is important to understand the extent of HGT and genome plasticity of bacteria in the environment. This review summarizes the present state of knowledge on HGT between bacteria as a crucial mechanism contributing to bacterial adaptability and diversity. In view of the use of GM crops and microbes in agricultural settings, in this mini-review we focus particularly on the presence and role of MGE in soil and plant-associated bacteria and the factors affecting gene transfer.", "title": "" }, { "docid": "db9ab90f56a5762ebf6729ffc802a02a", "text": "In this paper we present a novel approach to music analysis, in which a grammar is automatically generated explaining a musical work’s structure. The proposed method is predicated on the hypothesis that the shortest possible grammar provides a model of the musical structure which is a good representation of the composer’s intent. The effectiveness of our approach is demonstrated by comparison of the results with previously-published expert analysis; our automated approach produces results comparable to human annotation. We also illustrate the power of our approach by showing that it is able to locate errors in scores, such as introduced by OMR or human transcription. Further, our approach provides a novel mechanism for intuitive high-level editing and creative transformation of music. A wide range of other possible applications exists, including automatic summarization and simplification; estimation of musical complexity and similarity, and plagiarism detection.", "title": "" }, { "docid": "914c985dc02edd09f0ee27b75ecee6a4", "text": "Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP studies in 5- to 16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual components, the P1 and the occipito-temporal N170. 
To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between the ages of 4 and 17, and a group of adults. We found that none of the previously reported age-dependent changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity: it had the same topography and right hemisphere dominance, it was absent for meaningless (scrambled) stimuli, and larger and earlier for faces than cars. The data also illustrate the large amount of inter-individual and inter-trial variance in young children's data, which causes the N170 to merge with a later component, the N250, in grand-averaged data. Based on our observations, we suggest that the previously reported \"bi-fid\" N170 of young children is in fact the N250. Overall, our data indicate that the electrophysiological markers of face-sensitive perceptual processes are present from 4 years of age and do not appear to change throughout development.", "title": "" }, { "docid": "3cd32b304b7e5b4bc102a5e38ae1f488", "text": "With the growing emphasis on reuse, the software development process moves toward component-based software design. As a result, there is a need for modeling approaches that are capable of considering the architecture of the software and estimating the reliability by taking into account the interactions between the components, the utilization of the components, and the reliabilities of the components and of their interfaces with other components. This paper details the state of the architecture-based approach to reliability assessment of component-based software and describes how it can be used to examine software behavior right from the design stage to implementation and final deployment. First, the common requirements of the architecture-based models are identified and the classification is proposed. Then the key models in each class are described in detail and the relation among them is discussed. A critical analysis of underlying assumptions, limitations and applicability of these models is provided, which should be helpful in determining the directions for future research.", "title": "" }, { "docid": "2aeaffcd6af02f0c61f4cf998a3e630c", "text": "This paper reports on experiments to improve the Optical Character Recognition (ocr) quality of historical text as a preliminary step in text mining. We analyse the quality of ocred text compared to a gold standard and show how it can be improved by performing two automatic correction steps. We also demonstrate the impact this can have on named entity recognition in a preliminary extrinsic evaluation. This work was performed as part of the Trading Consequences project which is focussed on text mining of historical documents for the study of nineteenth century trade in the British Empire.", "title": "" }, { "docid": "19b16abf5ec7efe971008291f38de4d4", "text": "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. 
Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the ℓ2-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data, which preserves the inter-modality and intra-modality similarity relationships. An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches.", "title": "" }, { "docid": "aa03d917910a3da1f22ceea8f5b8d1c8", "text": "We train a language-universal dependency parser on a multilingual collection of treebanks. The parsing model uses multilingual word embeddings alongside learned and specified typological information, enabling generalization based on linguistic universals and based on typological similarities. We evaluate our parser’s performance on languages in the training set as well as on the unsupervised scenario where the target language has no trees in the training data, and find that multilingual training outperforms standard supervised training on a single language, and that generalization to unseen languages is competitive with existing model-transfer approaches.", "title": "" }, { "docid": "27128d582432a2d76df88bab16f9f835", "text": "During the last twenty years genetic algorithms [6] and other evolutionary algorithms [11] have been applied to many hard problems with very good results. However, for many constrained problems the results were mixed. It seems that (in general) there has not been any single accepted strategy to deal with constrained problems: most researchers used some ad-hoc methods for handling problem specific constraints. The reason for this phenomenon might be that there is experimental evidence [10] that incorporation of the problem specific knowledge (i.e., the problem's constraints) into the evolutionary algorithm (i.e., into its chromosomal structures and genetic operators) enhances its performance in a significant way. The constraint-handling techniques for evolutionary algorithms can be grouped into a few categories. One way of dealing with candidates that violate the constraints is to generate potential solutions without considering the constraints and then to penalize them by decreasing the \"goodness\" of the evaluation function. In other words, a constrained problem is transformed into an unconstrained one by associating a penalty with all constraint violations; these penalties are included in the function evaluation. Of course, there are a variety of possible penalty functions which can be applied. Some penalty functions assign a constant as a penalty measure. Other penalty functions depend on the degree of violation: the larger the violation is, the greater the penalty imposed (however, the growth of the function can be logarithmic, linear, quadratic, exponential, etc. with respect to the size of the violation). 
Each of these categories of penalty functions has its own disadvantages; in [4] Davis wrote:", "title": "" }, { "docid": "ff2894c10a19212668ce4e6b2750b22d", "text": "Three-phase voltage source converters (VSCs) are commonly used as a power flow interface in ac/dc hybrid power systems. The ac power grid suffers from unpredictable short-circuit faults and power flow fluctuations, causing undesirable grid voltage dips. The voltage dips may last for a short time or a long duration, and vary the working conditions of VSCs. Due to their nonlinear characteristics, VSCs may enter an abnormal operating mode in response to voltage dips. In this paper, the transient response of three-phase VSCs under practical grid voltage dips is studied and a catastrophic bifurcation phenomenon is identified in the system. The converter will exhibit an irreversible instability after the dips. The expanded magnitude of ac reactive current may cause catastrophic consequences for the system. A full-order eigenvalue analysis and a reduced-order mixed-potential-theory-based analysis are adopted to reveal the physical origin of the large-signal instability phenomenon. The key parameters of the system are identified and the boundaries of instability are located. The bifurcation phenomenon and a set of design-oriented stability boundaries in some chosen parameter space are verified by cycle-by-cycle simulations and experimental measurement on a practical grid-connected VSC prototype.", "title": "" }, { "docid": "d698d49a82829a2bb772d1c3f6c2efc5", "text": "The concepts of Data Warehouse, Cloud Computing and Big Data have been proposed during the era of data flood. By reviewing current progress in data warehouse studies, this paper introduces a framework to achieve better visualization for Big Data. This framework can reduce the cost of building Big Data warehouses by dividing data into sub-datasets and visualizing them respectively. Meanwhile, based on the powerful visualization tool D3.js and directed by the principle of Whole-Parts, current data can be presented to users from different dimensions by different rich statistical graphics.", "title": "" }, { "docid": "dd05688335b4240bbc40919870e30f39", "text": "In this tool report, we present an overview of the Watson system, a Semantic Web search engine providing various functionalities not only to find and locate ontologies and semantic data online, but also to explore the content of these semantic documents. Beyond the simple facade of a search engine for the Semantic Web, we show that the availability of such a component brings new possibilities in terms of developing semantic applications that exploit the content of the Semantic Web. Indeed, Watson provides a set of APIs containing high level functions for finding, exploring and querying semantic data and ontologies that have been published online. Thanks to these APIs, new applications have emerged that connect activities such as ontology construction, matching, sense disambiguation and question answering to the Semantic Web, developed by our group and others. In addition, we also describe Watson as an unprecedented research platform for the study of the Semantic Web, and of formalised knowledge in general.", "title": "" } ]
scidocsrr
784b3274c26a7ce84049cd33febd2781
Antecedents of the adoption of online games technologies: The study of adolescent behavior in playing online games
[ { "docid": "34f0a6e303055fc9cdefa52645c27ed5", "text": "Purpose – The purpose of this paper is to identify the factors that influence people to play socially interactive games on mobile devices. Based on network externalities and theory of uses and gratifications (U&G), it seeks to provide direction for further academic research on this timely topic. Design/methodology/approach – Based on 237 valid responses collected from online questionnaires, structural equation modeling technology was employed to examine the research model. Findings – The results reveal that both network externalities and individual gratifications significantly influence the intention to play social games on mobile devices. Time flexibility, however, which is one of the mobile device features, appears to contribute relatively little to the intention to play mobile social games. Originality/value – This research successfully applies a combination of network externalities theory and U&G theory to investigate the antecedents of players’ intentions to play mobile social games. This study is able to provide a better understanding of how two dimensions – perceived number of users/peers and individual gratification – influence mobile game playing, an insight that has not been examined previously in the mobile apps literature.", "title": "" } ]
[ { "docid": "f68b11af8958117f75fc82c40c51c395", "text": "Uncertainty accompanies our life processes and covers almost all fields of scientific studies. Two general categories of uncertainty, namely, aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from natural variability of the physical world (e.g., random show of a flipped coin), epistemic uncertainty origins from human's lack of knowledge of the physical world, as well as ability of measuring and modeling the physical world (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on the probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.", "title": "" }, { "docid": "b4f2cbda004ab3c0849f0fe1775c2a7a", "text": "This research investigates the influence of religious preference and practice on the use of contraception. Much of earlier research examines the level of religiosity on sexual activity. This research extends this reasoning by suggesting that peer group effects create a willingness to mask the level of sexuality through the use of contraception. While it is understood that certain religions, that is, Catholicism does not condone the use of contraceptives, this research finds that Catholics are more likely to use certain methods of contraception than other religious groups. With data on contraceptive use from the Center for Disease Control’s Family Growth Survey, a likelihood probability model is employed to investigate the impact religious affiliation on contraception use. Findings suggest a preference for methods that ensure non-pregnancy while preventing feelings of shame and condemnation in their religious communities.", "title": "" }, { "docid": "b1bb036fb8df8174d4c6b27480c2dc89", "text": "Over the past 5 years numerous reports have confirmed and replicated the specific brain cooling and thermal window predictions derived from the thermoregulatory theory of yawning, and no study has found evidence contrary to these findings. Here we review the comparative research supporting this model of yawning among homeotherms, while highlighting a recent report showing how the expression of contagious yawning in humans is altered by seasonal climate variation. The fact that yawning is constrained to a thermal window of ambient temperature provides unique and compelling support in favor of this theory. Heretofore, no existing alternative hypothesis of yawning can explain these results, which have important implications for understanding the potential functional role of this behavior, both physiologically and socially, in humans and other animals. 
In discussion we stress the broader applications of this work in clinical settings, and counter the various criticisms of this theory.", "title": "" }, { "docid": "50a89110795314b5610fabeaf41f0e40", "text": "People are capable of robust evaluations of their decisions: they are often aware of their mistakes even without explicit feedback, and report levels of confidence in their decisions that correlate with objective performance. These metacognitive abilities help people to avoid making the same mistakes twice, and to avoid overcommitting time or resources to decisions that are based on unreliable evidence. In this review, we consider progress in characterizing the neural and mechanistic basis of these related aspects of metacognition-confidence judgements and error monitoring-and identify crucial points of convergence between methods and theories in the two fields. This convergence suggests that common principles govern metacognitive judgements of confidence and accuracy; in particular, a shared reliance on post-decisional processing within the systems responsible for the initial decision. However, research in both fields has focused rather narrowly on simple, discrete decisions-reflecting the correspondingly restricted focus of current models of the decision process itself-raising doubts about the degree to which discovered principles will scale up to explain metacognitive evaluation of real-world decisions and actions that are fluid, temporally extended, and embedded in the broader context of evolving behavioural goals.", "title": "" }, { "docid": "c85ee4139239b17d98b0d77836e00b72", "text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.", "title": "" }, { "docid": "4ba595a34ae03c1724d434f0cbdbf663", "text": "Studies on the development of protocols for the clonal propagation, through somatic embryogenesis, of coconut have been reported for the past three decades, mostly using inflorescence explants, but with low reproducibility and efficiency. Recent improvements in these respects have been achieved using plumular explants. Here, we report a developmental study of embryogenesis in plumule explants using histological techniques in order to extend our understanding of this process. Coconut plumule explants consisted of the shoot meristem including leaf primordia. At day 15 of culture, the explants did not show any apparent growth; however, a transverse section showed noticeable growth of the plumular leaves forming a ring around the inner leaves and the shoot meristem, which did not show any apparent growth. 
At day 30, the shoot meristem started to grow and the plumular leaves continued growing. At day 45, the explants were still compact and white in color, but showed partial dedifferentiation and meristematic cell proliferation leading to the development of callus structures with a translucent appearance. After 60 d, these meristematic cells evolved into nodular structures. At day 75, the nodular structures became pearly globular structures on the surface of translucent structures, from which somatic embryos eventually formed and presented well-developed root and caulinar meristems. These results allow better insights and an integrated view into the somatic embryogenesis process in coconut plumule explants, which could be helpful for future studies that eventually could lead us to improved control of the process and greater efficiency of somatic embryo and plantlet formation.", "title": "" }, { "docid": "9e9f967d9e19ab88830a91290e7ac6e7", "text": "Planning for the information systems in an organization generally has not been closely related to the overall strategic planning processes through which the organization prepares for its future. An MIS strategic planning process is conceptualized and illustrated as one which links the organization's \"strategy set\" to an MIS \"strategy set.\" The literature of management information systems (MIS) concentrates largely on the nature and structure of MIS's and on processes for designing and developing such systems. The idea of \"planning for the MIS\" is usually treated as either one of developing the need and the general design concept for such a system, or in the context of project planning for the MIS development effort. However, strategic planning for the informational needs of the organization is both feasible and necessary if the MIS is to support the basic purposes and goals of the organization. Indeed, one of the possible explanations [6] for the failure of many MIS's is that they have been designed from the same \"bottom up\" point of view that characterized the development of the data processing systems of an earlier era. Such design approaches primarily reflect the pursuit of efficiency, such as through cost savings, rather than the pursuit of greater organizational effectiveness. The modern view of an MIS as an organizational decision support system is inconsistent with the design/development approaches which are appropriate for data processing. The organization's operating efficiency is but one aspect for consideration in management decision making. The achievement of greater organizational effectiveness is the paramount consideration in most of the management decisions which the MIS is to support; it also must be of paramount importance in the design of the MIS. There is an intrinsic linkage of the decision-supporting MIS to the organization's purpose, objectives, and strategy. While this conclusion may appear to be straightforward, it has not been operationalized as a part of MIS design methodology. There are those who argue that the MIS designer cannot hope to get involved in such things as organizational missions, objectives, and strategies, since they are clearly beyond his domain of authority. This article describes an operationally feasible approach for identifying and utilizing the elements of the organization's \"strategy set\" to plan for the MIS. 
Whether or not written state", "title": "" }, { "docid": "ada7b43edc18b321c57a978d7a3859ae", "text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.", "title": "" }, { "docid": "d509cb384ecddafa0c4f866882af2c77", "text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.", "title": "" }, { "docid": "410d4b0eb8c60517506b0d451cf288ba", "text": "Prepositional phrases (PPs) express crucial information that knowledge base construction methods need to extract. However, PPs are a major source of syntactic ambiguity and still pose problems in parsing. We present a method for resolving ambiguities arising from PPs, making extensive use of semantic knowledge from various resources. As training data, we use both labeled and unlabeled data, utilizing an expectation maximization algorithm for parameter estimation. Experiments show that our method yields improvements over existing methods including a state of the art dependency parser.", "title": "" }, { "docid": "1d15a6b19d4b36fec96afc0e5f55cd25", "text": "Image captioning has been recently gaining a lot of attention thanks to the impressive achievements shown by deep captioning architectures, which combine Convolutional Neural Networks to extract image representations and Recurrent Neural Networks to generate the corresponding captions. 
At the same time, a significant research effort has been dedicated to the development of saliency prediction models, which can predict human eye fixations. Even though saliency information could be useful to condition an image captioning architecture, by providing an indication of what is salient and what is not, research is still struggling to incorporate these two techniques. In this work, we propose an image captioning approach in which a generative recurrent neural network can focus on different parts of the input image during the generation of the caption, by exploiting the conditioning given by a saliency prediction model on which parts of the image are salient and which are contextual. We show, through extensive quantitative and qualitative experiments on large-scale datasets, that our model achieves superior performance with respect to captioning baselines with and without saliency and to different state-of-the-art approaches combining saliency and captioning.", "title": "" }, { "docid": "7a05f2c12c3db9978807eb7c082db087", "text": "This paper discusses the importance, the complexity and the challenges of mapping a mobile robot's unknown and dynamic environment, besides the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing dynamic navigation systems for mobile robots. The paper presents the state of the art in map building and localization for mobile robots navigating within an unknown environment, and then introduces a solution to the complex problem of autonomous map building and maintenance, with a focus on developing an incremental grid-based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensory information while wandering in it and staying away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in a parallel and distributed framework. Simulation-based experiments have been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.", "title": "" }, { "docid": "b07f858d08f40f61f3ed418674948f12", "text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. 
As a result, high-level abstractions of games, based on a common language, can be used to configure metaprogramming transformations during game production, providing a great level of compatibility between game domain and game implementation artifacts.", "title": "" }, { "docid": "b1810c928902c96784b922c304079641", "text": "The rapid proliferation of wireless networks and mobile computing applications has changed the landscape of network security. The traditional way of protecting networks with firewalls and encryption software is no longer sufficient and effective. We need to search for new architectures and mechanisms to protect the wireless networks and mobile computing applications. In this paper, we examine the vulnerabilities of wireless networks and argue that we must include intrusion detection in the security architecture for the mobile computing environment. We have developed such an architecture and evaluated a key mechanism in this architecture, anomaly detection for mobile ad-hoc networks, through simulation experiments.", "title": "" }, { "docid": "7b2d1af8db446019ba45511098dddefe", "text": "This article proposes a novel online portfolio selection strategy named “Passive Aggressive Mean Reversion” (PAMR). Unlike traditional trend following approaches, the proposed approach relies upon the mean reversion relation of financial markets. Equipped with online passive aggressive learning technique from machine learning, the proposed portfolio selection strategy can effectively exploit the mean reversion property of markets. By analyzing PAMR’s update scheme, we find that it nicely trades off between portfolio return and volatility risk and reflects the mean reversion trading principle. We also present several variants of PAMR algorithm, including a mixture algorithm which mixes PAMR and other strategies. We conduct extensive numerical experiments to evaluate the empirical performance of the proposed algorithms on various real datasets. The encouraging results show that in most cases the proposed PAMR strategy outperforms all benchmarks and almost all state-of-the-art portfolio selection strategies under various performance metrics. In addition to its superior performance, the proposed PAMR runs extremely fast and thus is very suitable for real-life online trading applications. The experimental testbed including source codes and data sets is available at  http://www.cais.ntu.edu.sg/~chhoi/PAMR/ .", "title": "" }, { "docid": "7f52960fb76c3c697ef66ffee91b13ee", "text": "The aim of this work was to explore the feasibility of combining hot melt extrusion (HME) with 3D printing (3DP) technology, with a view to producing different shaped tablets which would be otherwise difficult to produce using traditional methods. A filament extruder was used to obtain approx. 4% paracetamol loaded filaments of polyvinyl alcohol with characteristics suitable for use in fused-deposition modelling 3DP. Five different tablet geometries were successfully 3D-printed: cube, pyramid, cylinder, sphere and torus. The printing process did not affect the stability of the drug. Drug release from the tablets was not dependent on the surface area but instead on surface area to volume ratio, indicating the influence that geometrical shape has on drug release. An erosion-mediated process controlled drug release. 
This work has demonstrated the potential of 3DP to manufacture tablet shapes of different geometries, many of which would be challenging to manufacture by powder compaction.", "title": "" }, { "docid": "17cb27030abc5054b8f51256bdee346a", "text": "Purpose – This paper seeks to define and describe agile project management using the Scrum methodology as a method for more effectively managing and completing projects. Design/methodology/approach – This paper provides a general overview and introduction to the concepts of agile project management and the Scrum methodology in particular. Findings – Agile project management using the Scrum methodology allows project teams to manage digital library projects more effectively by decreasing the amount of overhead dedicated to managing the project. Using an iterative process of continuous review and short-design time frames, the project team is better able to quickly adapt projects to rapidly evolving environments in which systems will be used. Originality/value – This paper fills a gap in the digital library project management literature by providing an overview of agile project management methods.", "title": "" }, { "docid": "9b9425132e89d271ed6baa0dbc16b941", "text": "Although personalized recommendation has been investigated for decades, the wide adoption of Latent Factor Models (LFM) has made the explainability of recommendations a critical issue to both the research community and practical application of recommender systems. For example, in many practical systems the algorithm just provides a personalized item recommendation list to the users, without persuasive personalized explanation about why such an item is recommended while another is not. Unexplainable recommendations introduce negative effects to the trustworthiness of recommender systems, and thus affect the effectiveness of recommendation engines. In this work, we investigate explainable recommendation in aspects of data explainability, model explainability, and result explainability, and the main contributions are as follows: 1. Data Explainability: We propose Localized Matrix Factorization (LMF) framework based Bordered Block Diagonal Form (BBDF) matrices, and further applied this technique for parallelized matrix factorization. 2. Model Explainability: We propose Explicit Factor Models (EFM) based on phrase-level sentiment analysis, as well as dynamic user preference modeling based on time series analysis. In this work, we extract product features and user opinions towards different features from large-scale user textual reviews based on phrase-level sentiment analysis techniques, and introduce the EFM approach for explainable model learning and recommendation. 3. Economic Explainability: We propose the Total Surplus Maximization (TSM) framework for personalized recommendation, as well as the model specification in different types of online applications. Based on basic economic concepts, we provide the definitions of utility, cost, and surplus in the application scenario of Web services, and propose the general framework of web total surplus calculation and maximization.", "title": "" }, { "docid": "7b2bf230751b29044ecf36efc3961bf5", "text": "A double inverted pendulum plant has been in the domain of control researchers as an established model for studies on stability. The stability of such as a system taking the linearized plant dynamics has yielded satisfactory results by many researchers using classical control techniques. 
The established model that is analyzed as part of this work was tested under the influence of time delay, where the controller was fine-tuned using a BAT algorithm taking into consideration the fitness function of the square of the error. The proposed method gave results that were better when compared with those obtained without time delay, wherein the calculated values indicated the issues that arise when incorporating time delay.", "title": "" }, { "docid": "ce31be5bfeb05a30c5479a3192d20f93", "text": "Network embedding represents nodes in a continuous vector space and preserves structure information from the network. Existing methods usually adopt a “one-size-fits-all” approach when concerning multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in the embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes the collaboration of different scales and lets them vote for robust representations. The proposed AAANE consists of two components: 1) an attention-based autoencoder that effectively captures the highly non-linear network structure, which can de-emphasize irrelevant scales during training, and 2) an adversarial regularization that guides the autoencoder to learn robust representations by matching the posterior distribution of the latent embeddings to a given prior distribution. This is the first attempt to introduce attention mechanisms to multi-scale network embedding. Experimental results on real-world networks show that our learned attention parameters are different for every network and the proposed approach outperforms existing state-of-the-art approaches for network embedding.", "title": "" } ]
scidocsrr
ae5adc99aab961a670843dfb839befb6
Collaborative creativity: a complex systems model with distributed affect
[ { "docid": "4f5272a35c9991227a6d098209de8d6c", "text": "This is an investigation of \" Online Creativity. \" I will present a new account of the cognitive and social mechanisms underlying complex thinking of creative scientists as they work on significant problems in contemporary science. I will lay out an innovative methodology that I have developed for investigating creative and complex thinking in a real-world context. Using this method, I have discovered that there are a number of strategies that are used in contemporary science that increase the likelihood of scientists making discoveries. The findings reported in this chapter provide new insights into complex scientific thinking and will dispel many of the myths surrounding the generation of new concepts and scientific discoveries. InVivo cognition: A new way of investigating cognition There is a large background in cognitive research on thinking, reasoning and problem solving processes that form the foundation for creative cognition (see Dunbar, in press, Holyoak 1996 for recent reviews). However, to a large extent, research on reasoning has demonstrated that subjects in psychology experiments make vast numbers of thinking and reasoning errors even in the most simple problems. How is creative thought even possible if people make so many reasoning errors? One problem with research on reasoning is that the concepts and stimuli that the subjects are asked to use are often arbitrary and involve no background knowledge (cf. Dunbar, 1995; Klahr & Dunbar, 1988). I have proposed that one way of determining what reasoning errors are specific and which are general is to investigate cognition in the cognitive laboratory and the real world (Dunbar, 1995). Psychologists should conduct both InVitro and InVivo research to understand thinking. InVitro research is the standard psychological experiment where subjects are brought into the laboratory and controlled experiments are conducted. As can be seen from the research reported in this volume, this approach yields many insights into the psychological mechanisms underlying complex thinking. The use of an InVivo methodology in which online thinking and reasoning are investigated in a real-world context yields fundamental insights into the basic cognitive mechanisms underlying complex cognition and creativity. The results of InVivo cognitive research can then be used as a basis for further InVitro work in which controlled experiments are conducted. In this chapter, I will outline some of the results of my ongoing InVivo research on creative scientific thinking and relate this research back to the more common InVitro research and show that the …", "title": "" }, { "docid": "8d3c1e649e40bf72f847a9f8ac6edf38", "text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. 
Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.", "title": "" } ]
[ { "docid": "f82ce890d66c746a169a38fdad702749", "text": "The following review paper presents an overview of the current crop yield forecasting methods and early warning systems for the global strategy to improve agricultural and rural statistics across the globe. Different sections describing simulation models, remote sensing, yield gap analysis, and methods to yield forecasting compose the manuscript. 1. Rationale Sustainable land management for crop production is a hierarchy of systems operating in— and interacting with—economic, ecological, social, and political components of the Earth. This hierarchy ranges from a field managed by a single farmer to regional, national, and global scales where policies and decisions influence crop production, resource use, economics, and ecosystems at other levels. Because sustainability concepts must integrate these diverse issues, agricultural researchers who wish to develop sustainable productive systems and policy makers who attempt to influence agricultural production are confronted with many challenges. A multiplicity of problems can prevent production systems from being sustainable; on the other hand, with sufficient attention to indicators of sustainability, a number of practices and policies could be implemented to accelerate progress. Indicators to quantify changes in crop production systems over time at different hierarchical levels are needed for evaluating the sustainability of different land management strategies. To develop and test sustainability concepts and yield forecast methods globally, it requires the implementation of long-term crop and soil management experiments that include measurements of crop yields, soil properties, biogeochemical fluxes, and relevant socioeconomic indicators. Long-term field experiments cannot be conducted with sufficient detail in space and time to find the best land management practices suitable for sustainable crop production. Crop and soil simulation models, when suitably tested in reasonably diverse space and time, provide a critical tool for finding combinations of management strategies to reach multiple goals required for sustainable crop production. The models can help provide land managers and policy makers with a tool to extrapolate experimental results from one location to others where there is a lack of response information. Agricultural production is significantly affected by environmental factors. Weather influences crop growth and development, causing large intra-seasonal yield variability. In addition, spatial variability of soil properties, interacting with the weather, cause spatial yield variability. Crop agronomic management (e.g. planting, fertilizer application, irrigation, tillage, and so on) can be used to offset the loss in yield due to effects of weather. As a result, yield forecasting represents an important tool for optimizing crop yield and to evaluate the crop-area insurance …", "title": "" }, { "docid": "d79d6dd8267c66ad98f33bd54ff68693", "text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. 
Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.", "title": "" }, { "docid": "3f6fcee0073e7aaf587602d6510ed913", "text": "BACKGROUND\nTreatment of early onset scoliosis (EOS) is challenging. In many cases, bracing will not be effective and growing rod surgery may be inappropriate. Serial, Risser casts may be an effective intermediate method of treatment.\n\n\nMETHODS\nWe studied 20 consecutive patients with EOS who received serial Risser casts under general anesthesia between 1999 and 2011. Analyses included diagnosis, sex, age at initial cast application, major curve severity, initial curve correction, curve magnitude at the time of treatment change or latest follow-up for those still in casts, number of casts per patient, the type of subsequent treatment, and any complications.\n\n\nRESULTS\nThere were 8 patients with idiopathic scoliosis, 6 patients with neuromuscular scoliosis, 5 patients with syndromic scoliosis, and 1 patient with skeletal dysplasia. Fifteen patients were female and 5 were male. The mean age at first cast was 3.8±2.3 years (range, 1 to 8 y), and the mean major curve magnitude was 74±18 degrees (range, 40 to 118 degrees). After initial cast application, the major curve measured 46±14 degrees (range, 25 to 79 degrees). At treatment change or latest follow-up for those still in casts, the major curve measured 53±24 degrees (range, 13 to 112 degrees). The mean time in casts was 16.9±9.1 months (range, 4 to 35 mo). The mean number of casts per patient was 4.7±2.2 casts (range, 1 to 9 casts). At the time of this study, 7 patients had undergone growing rod surgery, 6 patients were still undergoing casting, 5 returned to bracing, and 2 have been lost to follow-up. Four patients had minor complications: 2 patients each with superficial skin irritation and cast intolerance.\n\n\nCONCLUSIONS\nSerial Risser casting is a safe and effective intermediate treatment for EOS. It can stabilize relatively large curves in young children and allows the child to reach a more suitable age for other forms of treatment, such as growing rods.\n\n\nLEVEL OF EVIDENCE\nLevel IV; case series.", "title": "" }, { "docid": "4fa68f011f7cb1b4874dd4b10070be17", "text": "This paper demonstrates the development of ontology for satellite databases. First, I create a computational ontology for the Union of Concerned Scientists (UCS) Satellite Database (UCSSD for short), called the UCS Satellite Ontology (or UCSSO). 
Second, in developing UCSSO I show that The Space Situational Awareness Ontology (SSAO), an existing space domain reference ontology, and related ontology work by the author (Rovetto 2015, 2016) can be used either (i) with a database-specific local ontology such as UCSSO, or (ii) in its stead. In case (i), local ontologies such as UCSSO can reuse SSAO terms, perform term mappings, or extend it. In case (ii), the author's orbital space ontology work, such as the SSAO, is usable by the UCSSD and organizations with other space object catalogs, as a reference ontology suite providing a common semantically-rich domain model. The SSAO, UCSSO, and the broader Orbital Space Environment Domain Ontology project is online at https://purl.org/space-ontology and GitHub. This ontology effort aims, in part, to provide accurate formal representations of the domain for various applications. Ontology engineering has the potential to facilitate the sharing and integration of satellite data from federated databases and sensors for safer spaceflight.", "title": "" }, { "docid": "28cbdb82603c720efba6880034344b94", "text": "An experiment is reported which tests Fazey & Hardy's (1988) catastrophe model of anxiety and performance. Eight experienced basketball players were required to perform a set shooting task, under conditions of high and low cognitive anxiety. On each of these occasions, physiological arousal was manipulated by means of physical work in such a way that subjects were tested with physiological arousal increasing and decreasing. Curve-fitting procedures followed by non-parametric tests of significance confirmed (p less than .002) Fazey & Hardy's hysteresis hypothesis: namely, that the polynomial curves for the increasing vs. decreasing arousal conditions would be horizontally displaced relative to each other in the high cognitive anxiety condition, but superimposed on top of one another in the low cognitive anxiety condition. Other non-parametric procedures showed that subjects' maximum performances were higher, their minimum performances lower, and their critical decrements in performance greater in the high cognitive anxiety condition than in the low cognitive anxiety condition. These results were taken as strong support for Fazey & Hardy's catastrophe model of anxiety and performance. The implications of the model for current theorizing on the anxiety-performance relationship are also discussed.", "title": "" }, { "docid": "37d77131c6100aceb4a4d49a5416546f", "text": "Automated medical image analysis has a significant value in diagnosis and treatment of lesions. Brain tumor segmentation has a special importance and difficulty due to the difference in appearances and shapes of the different tumor regions in magnetic resonance images. Additionally the data sets are heterogeneous and usually limited in size in comparison with the computer vision problems. The recently proposed adversarial training has shown promising results in generative image modeling. In this paper we propose a novel end-to-end trainable architecture for brain tumor semantic segmentation through conditional adversarial training. We exploit conditional Generative Adversarial Network (cGAN) and train a semantic segmentation Convolution Neural Network (CNN) along with an adversarial network that discriminates segmentation maps coming from the ground truth or from the segmentation network for BraTS 2017 segmentation task [15,4,2,3].
We also propose an end-to-end trainable CNN for survival day prediction based on deep learning techniques for BraTS 2017 prediction task [15,4,2,3]. The experimental results demonstrate the superior ability of the proposed approach for both tasks. The proposed model achieves on validation data a DICE score, Sensitivity and Specificity respectively 0.68, 0.99 and 0.98 for the whole tumor, regarding online judgment system.", "title": "" }, { "docid": "f7535a097b65dccf1ee8e615244d98c5", "text": "Wireless power transfer via magnetic resonant coupling is experimentally demonstrated in a system with a large source coil and either one or two small receivers. Resonance between source and load coils is achieved with lumped capacitors terminating the coils. A circuit model is developed to describe the system with a single receiver, and extended to describe the system with two receivers. With parameter values chosen to obtain good fits, the circuit models yield transfer frequency responses that are in good agreement with experimental measurements over a range of frequencies that span the resonance. Resonant frequency splitting is observed experimentally and described theoretically for the multiple receiver system. In the single receiver system at resonance, more than 50% of the power that is supplied by the actual source is delivered to the load. In a multiple receiver system, a means for tracking frequency shifts and continuously retuning the lumped capacitances that terminate each receiver coil so as to maximize efficiency is a key issue for future work.", "title": "" }, { "docid": "1090297224c76a5a2c4ade47cb932dba", "text": "Global illumination drastically improves visual realism of interactive applications. Although many interactive techniques are available, they have some limitations or employ coarse approximations. For example, general instant radiosity often has numerical error, because the sampling strategy fails in some cases. This problem can be reduced by a bidirectional sampling strategy that is often used in off-line rendering. However, it has been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing by using rasterization on a commodity DirectX® 11 capable GPU. Moreover, for glossy surfaces, a simple and efficient artifact suppression technique is also introduced.", "title": "" }, { "docid": "403becc6c79d81204493c3cacdd3ee4d", "text": "Studies of protein nutrition and biochemistry require reliable methods for analysis of amino acid (AA) composition in polypeptides of animal tissues and foods. Proteins are hydrolyzed by 6M HCl (110°C for 24h), 4.2M NaOH (105°C for 20 h), or proteases. Analytical techniques that require high-performance liquid chromatography (HPLC) include pre-column derivatization with 4-chloro-7-nitrobenzofurazan, 9-fluorenyl methylchloroformate, phenylisothiocyanate, naphthalene-2,3-dicarboxaldehyde, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate, and o-phthaldialdehyde (OPA). OPA reacts with primary AA (except cysteine or cystine) in the presence of 2-mercaptoethanol or 3-mercaptopropionic acid to form a highly fluorescent adduct. 
OPA also reacts with 4-amino-1-butanol and 4-aminobutane-1,3-diol produced from oxidation of proline and 4-hydroxyproline, respectively, in the presence of chloramine-T plus sodium borohydride at 60°C, or with S-carboxymethyl-cysteine formed from cysteine and iodoacetic acid at 25°C. Fluorescence of OPA derivatives is monitored at excitation and emission wavelengths of 340 and 455 nm, respectively. Detection limits are 50 fmol for AA. This technique offers the following advantages: simple procedures for preparation of samples, reagents, and mobile-phase solutions; rapid pre-column formation of OPA-AA derivatives and their efficient separation at room temperature (e.g., 20-25°C); high sensitivity of detection; easy automation on the HPLC apparatus; few interfering side reactions; a stable chromatography baseline for accurate integration of peak areas; and rapid regeneration of guard and analytical columns. Thus, the OPA method provides a useful tool to determine AA composition in proteins of animal tissues (e.g., skeletal muscle, liver, intestine, placenta, brain, and body homogenates) and foods (e.g., milk, corn grain, meat, and soybean meal).", "title": "" }, { "docid": "3c3c30050b32b46c28abef3ecff06376", "text": "The analysis of social, communication and information networks for identifying patterns, evolutionary characteristics and anomalies is a key problem for the military, for instance in the Intelligence community. Current techniques do not have the ability to discern unusual features or patterns that are not a priori known. We investigate the use of deep learning for network analysis. Over the last few years, deep learning has had unprecedented success in areas such as image classification, speech recognition, etc. However, research on the use of deep learning to network or graph analysis is limited. We present three preliminary techniques that we have developed as part of the ARL Network Science CTA program: (a) unsupervised classification using a very highly trained image recognizer, namely Caffe; (b) supervised classification using a variant of convolutional neural networks on node features such as degree and assortativity; and (c) a framework called node2vec for learning representations of nodes in a network using a mapping to natural language processing.", "title": "" }, { "docid": "c9b7ddb6eb1431fcc508d29a1f25104b", "text": "The problem of finding the missing values of a matrix given a few of its entries, called matrix completion, has gathered a lot of attention in the recent years. Although the problem under the standard low rank assumption is NP-hard, Candès and Recht showed that it can be exactly relaxed if the number of observed entries is sufficiently large. In this work, we introduce a novel matrix completion model that makes use of proximity information about rows and columns by assuming they form communities. This assumption makes sense in several real-world problems like in recommender systems, where there are communities of people sharing preferences, while products form clusters that receive similar ratings. Our main goal is thus to find a low-rank solution that is structured by the proximities of rows and columns encoded by graphs. We borrow ideas from manifold learning to constrain our solution to be smooth on these graphs, in order to implicitly force row and column proximities. Our matrix recovery model is formulated as a convex non-smooth optimization problem, for which a well-posed iterative scheme is provided. 
We study and evaluate the proposed matrix completion on synthetic and real data, showing that the proposed structured low-rank recovery model outperforms the standard matrix completion model in many situations.", "title": "" }, { "docid": "f274322ad7eed4829945bc3d483ceecb", "text": "In this paper, an observer problem from a computer vision application is studied. Rigid body pose estimation using inertial sensors and a monocular camera is considered and it is shown how rotation estimation can be decoupled from position estimation. Orientation estimation is formulated as an observer problem with implicit output where the states evolve on (3). A careful observability study reveals interesting group theoretic structures tied to the underlying system structure. A locally convergent observer where the states evolve on (3) is proposed and numerical estimates of the domain of attraction is given. Further, it is shown that, given convergent orientation estimates, position estimation can be formulated as a linear implicit output problem. From an applications perspective, it is outlined how delayed low bandwidth visual observations and high bandwidth rate gyro measurements can provide high bandwidth estimates. This is consistent with real-time constraints due to the complementary characteristics of the sensors which are fused in a multirate way.", "title": "" }, { "docid": "aeb039a1e5ae76bf8e928e6b8cbfdf7f", "text": "ZHENG, Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and thus, used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel-color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffering Cold Zheng or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate an excellent performance of our proposed system.", "title": "" }, { "docid": "817d0da77bcdd0c695d2c064f5ed9f69", "text": "Intuition-based learning (IBL) has been used in various problem-solving areas such as risk analysis, medical diagnosis and criminal investigation. However, conventional IBL has the limitation that it has no criterion for choosing the trusted intuition based on the knowledge and experience. The purpose of this paper is to develop a learning model for human-computer cooperative from user’s perspective. We have established the theoretical foundation and conceptualization of the constructs for learning system with trusted intuition. And suggest a new machine learning technique called Trusted Intuition Network (TIN). 
We have developed a general instrument capable of reliably and accurately measuring trusted intuition in the context of intuitive learning systems. We also compare the results with the learning methods, artificial intuition networks and conventional IBL. The results of this paper show that the proposed technique outperforms those of many other methods, it overcomes the limitation of conventional IBL, and it provides improved uncertainty learning theory.", "title": "" }, { "docid": "bacd81a1074a877e0c943a6755290d34", "text": "This thesis addresses the problem of scheduling multiple, concurrent, adaptively parallel jobs on a multiprogrammed shared-memory multiprocessor. Adaptively parallel jobs are jobs for which the number of processors that can be used without waste varies during execution. We focus on the specific case of parallel jobs that are scheduled using a randomized work-stealing algorithm, as is used in the Cilk multithreaded language. We begin by developing a theoretical model for two-level scheduling systems, or those in which the operating system allocates processors to jobs, and the jobs schedule their threads on the processors. To analyze the performance of a job scheduling algorithm, we model the operating system as an adversary. We show that a greedy scheduler achieves an execution time that is within a factor of 2 of optimal under these conditions. Guided by our model, we present a randomized work-stealing algorithm for adaptively parallel jobs, algorithm WSAP, which takes a unique approach to estimating the processor desire of a job. We show that attempts to directly measure a job’s instantaneous parallelism are inherently misleading. We also describe a dynamic processor-allocation algorithm, algorithm DP, that allocates processors to jobs in a fair and efficient way. Using these two algorithms, we present the design and implementation of Cilk-AP, a two-level scheduling system for adaptively parallel workstealing jobs. Cilk-AP is implemented by extending the runtime system of Cilk. We tested the Cilk-AP system on a shared-memory symmetric multiprocessor (SMP) with 16 processors. Our experiments show that, relative to the original Cilk system, Cilk-AP incurs negligible overhead and provides up to 37% improvement in throughput and 30% improvement in response time in typical multiprogramming scenarios. This thesis represents joint work with Charles Leiserson and Kunal Agrawal of the Supercomputing Technologies Group at MIT’s Computer Science and Artificial Intelligence Laboratory. Thesis Supervisor: Charles E. Leiserson Title: Professor", "title": "" }, { "docid": "baa5eff969c4c81c863ec4c4c6ce7734", "text": "The research describes a rapid method for the determination of fatty acid (FA) contents in a micro-encapsulated fish-oil (μEFO) supplement by using attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopic technique and partial least square regression (PLSR) analysis. Using the ATR-FTIR technique, the μEFO powder samples can be directly analysed without any pre-treatment required, and our developed PLSR strategic approach based on the acquired spectral data led to production of a good linear calibration with R(2)=0.99. 
In addition, the subsequent predictions acquired from an independent validation set for the target FA compositions (i.e., total oil, total omega-3 fatty acids, EPA and DHA) were highly accurate when compared to the actual values obtained from standard GC-based technique, with plots between predicted versus actual values resulting in excellent linear fitting (R(2)≥0.96) in all cases. The study therefore demonstrated not only the substantial advantage of the ATR-FTIR technique in terms of rapidness and cost effectiveness, but also its potential application as a rapid, potentially automated, online monitoring technique for the routine analysis of FA composition in industrial processes when used together with the multivariate data analysis modelling.", "title": "" }, { "docid": "4c7624e4d1674a753fb54d2a826c3666", "text": "We tackle the question: how much supervision is needed to achieve state-of-the-art performance in part-of-speech (POS) tagging, if we leverage lexical representations given by the model of Brown et al. (1992)? It has become a standard practice to use automatically induced “Brown clusters” in place of POS tags. We claim that the underlying sequence model for these clusters is particularly well-suited for capturing POS tags. We empirically demonstrate this claim by drastically reducing supervision in POS tagging with these representations. Using either the bit-string form given by the algorithm of Brown et al. (1992) or the (less well-known) embedding form given by the canonical correlation analysis algorithm of Stratos et al. (2014), we can obtain 93% tagging accuracy with just 400 labeled words and achieve state-of-the-art accuracy (> 97%) with less than 1 percent of the original training data.", "title": "" }, { "docid": "54af3c39dba9aafd5b638d284fd04345", "text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).", "title": "" }, { "docid": "4bc910cb711aab699d9ec4e81cd0ce17", "text": "This study examined the links between desensitization to violent media stimuli and habitual media violence exposure as a predictor and aggressive cognitions and behavior as outcome variables. Two weeks after completing measures of habitual media violence exposure, trait aggression, trait arousability, and normative beliefs about aggression, undergraduates (N = 303) saw a violent film clip and a sad or a funny comparison clip. Skin conductance level (SCL) was measured continuously, and ratings of anxious and pleasant arousal were obtained after each clip. Following the clips, participants completed a lexical decision task to measure accessibility of aggressive cognitions and a competitive reaction time task to measure aggressive behavior. 
Habitual media violence exposure correlated negatively with SCL during violent clips and positively with pleasant arousal, response times for aggressive words, and trait aggression, but it was unrelated to anxious arousal and aggressive responding during the reaction time task. In path analyses controlling for trait aggression, normative beliefs, and trait arousability, habitual media violence exposure predicted faster accessibility of aggressive cognitions, partly mediated by higher pleasant arousal. Unprovoked aggression during the reaction time task was predicted by lower anxious arousal. Neither habitual media violence usage nor anxious or pleasant arousal predicted provoked aggression during the laboratory task, and SCL was unrelated to aggressive cognitions and behavior. No relations were found between habitual media violence viewing and arousal in response to the sad and funny film clips, and arousal in response to the sad and funny clips did not predict aggressive cognitions or aggressive behavior on the laboratory task. This suggests that the observed desensitization effects are specific to violent content.", "title": "" }, { "docid": "697ed30a5d663c1dda8be0183fa4a314", "text": "Due to the Web expansion, the prediction of online news popularity is becoming a trendy research topic. In this paper, we propose a novel and proactive Intelligent Decision Support System (IDSS) that analyzes articles prior to their publication. Using a broad set of extracted features (e.g., keywords, digital media content, earlier popularity of news referenced in the article) the IDSS first predicts if an article will become popular. Then, it optimizes a subset of the articles features that can more easily be changed by authors, searching for an enhancement of the predicted popularity probability. Using a large and recently collected dataset, with 39,000 articles from the Mashable website, we performed a robust rolling windows evaluation of five state of the art models. The best result was provided by a Random Forest with a discrimination power of 73%. Moreover, several stochastic hill climbing local searches were explored. When optimizing 1000 articles, the best optimization method obtained a mean gain improvement of 15 percentage points in terms of the estimated popularity probability. These results attest the proposed IDSS as a valuable tool for online news authors.", "title": "" } ]
scidocsrr
e4003e7c2bc849b3b3a60c67834e7a31
The affective shift model of work engagement.
[ { "docid": "cfddb85a8c81cb5e370fe016ea8d4c5b", "text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.", "title": "" }, { "docid": "b89099e9b01a83368a1ebdb2f4394eba", "text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.", "title": "" } ]
[ { "docid": "e82459841d697a538f3ab77817ed45e7", "text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.", "title": "" }, { "docid": "0e2d5444d16f7c710039f6145473131c", "text": "In this paper, a novel design approach for the development of robot hands is presented. This approach, that can be considered alternative to the “classical” one, takes into consideration compliant structures instead of rigid ones. Compliance effects, which were considered in the past as a “defect” to be mechanically eliminated, can be viceversa regarded as desired features and can be properly controlled in order to achieve desired properties from the robotic device. In particular, this is true for robot hands, where the mechanical complexity of “classical” design solutions has always originated complicated structures, often with low reliability and high costs. In this paper, an alternative solution to the design of dexterous robot hand is illustrated, considering a “mechatronic approach” for the integration of the mechanical structure, the sensory and electronic system, the control and the actuation part. Moreover, the preliminary experimental activity on a first prototype is reported and discussed. The results obtained so far, considering also reliability, costs and development time, are very encouraging, and allows to foresee a wider diffusion of dextrous hands for robotic applications.", "title": "" }, { "docid": "cc17ac1e38c98d3066cc63b15b931726", "text": "We present BPMN Miner 2.0: a tool that extracts hierarchical and block-structured BPMN process models from event logs. Given an event log in XES format, the tool partitions it into sub-logs (one per subprocess) and discovers a BPMN process model from each sub-log using existing techniques for discovering BPMN process models via heuristics nets or Petri nets. A drawback of these techniques is that they often produce spaghetti-like models and in some cases unsound models. Accordingly, BPMN Miner 2.0 applies post-processing steps to remove unsound constructions as well as a technique to block-structrure the resulting process models in a behavior-preserving manner. The tool is available as a standalone Java tool as well as a ProM and an Apromore plugin. 
The target audience of this demonstration includes process mining researchers as well as practitioners interested in exploring the potential of process mining using BPMN.", "title": "" }, { "docid": "a4f074b8e6b6c826e14b8f245a63b227", "text": "The high natural abundance of silicon, together with its excellent reliability and good efficiency in solar cells, suggest its continued use in production of solar energy, on massive scales, for the foreseeable future. Although organics, nanocrystals, nanowires and other new materials hold significant promise, many opportunities continue to exist for research into unconventional means of exploiting silicon in advanced photovoltaic systems. Here, we describe modules that use large-scale arrays of silicon solar microcells created from bulk wafers and integrated in diverse spatial layouts on foreign substrates by transfer printing. The resulting devices can offer useful features, including high degrees of mechanical flexibility, user-definable transparency and ultrathin-form-factor microconcentrator designs. Detailed studies of the processes for creating and manipulating such microcells, together with theoretical and experimental investigations of the electrical, mechanical and optical characteristics of several types of module that incorporate them, illuminate the key aspects.", "title": "" }, { "docid": "223505549222e4b6e7e46d21e67b5ab2", "text": "We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held out perplexity. We also propose a generalization to existing continuous stack models (Joulin & Mikolov, 2015; Grefenstette et al., 2015) to allow a variable number of pop operations more naturally that further improves performance. We further evaluate these language models in terms of their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state of the art results using memory augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories claiming a context-free backbone for natural language.", "title": "" }, { "docid": "22f49f2d6e3021516d93d9a96c408dbb", "text": "This paper presents Flower menu, a new type of Marking menu that does not only support straight, but also curved gestures for any of the 8 usual orientations. Flower menus make it possible to put many commands at each menu level and thus to create as large a hierarchy as needed for common applications. Indeed our informal analysis of menu breadth in popular applications shows that a quarter of them have more than 16 items. Flower menus can easily contain 20 items and even more (theoretical maximum of 56 items). Flower menus also support within groups as well as hierarchical groups. They can thus favor breadth organization (within groups) or depth organization (hierarchical groups): as a result, the designers can lay out items in a very flexible way in order to reveal meaningful item groupings. We also investigate the learning performance of the expert mode of Flower menus. A user experiment is presented that compares linear menus (baseline condition), Flower menus and Polygon menus, a variant of Marking menus that supports a breadth of 16 items. 
Our experiment shows that Flower menus are more efficient than both Polygon and Linear menus for memorizing command activation in expert mode.", "title": "" }, { "docid": "92ac3bfdcf5e554152c4ce2e26b77315", "text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.", "title": "" }, { "docid": "e31901738e78728a7376457f7d1acd26", "text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.", "title": "" }, { "docid": "597a3b52fd5114228d74398756d3359f", "text": "The authors report a meta-analysis of individual differences in detecting deception, confining attention to occasions when people judge strangers' veracity in real-time with no special aids. The authors have developed a statistical technique to correct nominal individual differences for differences introduced by random measurement error. Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. 
In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others' statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar's credibility than any other individual difference.", "title": "" }, { "docid": "cc8adbaf01e3ab61546fd875724ac270", "text": "This paper presents the image information mining based on a communication channel concept. The feature extraction algorithms encode the image, while an analysis of topic discovery will decode and send its content to the user in the shape of a semantic map. We consider this approach for a real meaning based semantic annotation of very high resolution remote sensing images. The scene content is described using a multi-level hierarchical information representation. Feature hierarchies are discovered considering that higher levels are formed by combining features from lower level. Such a level to level mapping defines our methodology as a deep learning process. The whole analysis can be divided in two major learning steps. The first one regards the Bayesian inference to extract objects and assign basic semantic to the image. The second step models the spatial interactions between the scene objects based on Latent Dirichlet Allocation, performing a high level semantic annotation. We used a WorldView2 image to exemplify the processing results.", "title": "" }, { "docid": "584de328ade02c34e36e2006f3e66332", "text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.", "title": "" }, { "docid": "41cfe93db7c4635e106a1d620ea31036", "text": "Neuroblastoma (NBL) and medulloblastoma (MBL) are tumors of the neuroectoderm that occur in children. NBL and MBL express Trk family tyrosine kinase receptors, which regulate growth, differentiation, and cell death. CEP-751 (KT-6587), an indolocarbazole derivative, is an inhibitor of Trk family tyrosine kinases at nanomolar concentrations. This study was designed to determine the effect of CEP-751 on the growth of NBL and MBL cell lines as xenografts. In vivo studies were conducted on four NBL cell lines (IMR-5, CHP-134, NBL-S, and SY5Y) and three MBL cell lines (D283, D341, and DAOY) using two treatment schedules: (a) treatment was started after the tumors were measurable (therapeutic study); or (b) 4-6 days after inoculation, before tumors were palpable (prevention study). CEP-751 was given at 21 mg/kg/dose administered twice a day, 7 days a week; the carrier vehicle was used as a control. In therapeutic studies, a significant difference in tumor size was seen between treated and control animals with IMR-5 on day 8 (P = 0.01), NBL-S on day 17 (P = 0.016), and CHP-134 on day 15 (P = 0.034). 
CEP-751 also had a significant growth-inhibitory effect on the MBL line D283 (on day 39, P = 0.031). Inhibition of tumor growth of D341 did not reach statistical significance, and no inhibition was apparent with DAOY. In prevention studies, CEP-751 showed a modest growth-inhibitory effect on IMR5 (P = 0.062) and CHP-134 (P = 0.049). Furthermore, inhibition of growth was greater in the SY5Y cell line transfected with TrkB compared with the untransfected parent cell line expressing no detectable TrkB. Terminal deoxynucleotidyl transferase-mediated nick end labeling studies showed CEP-751 induced apoptosis in the treated CHP-134 tumors, whereas no evidence of apoptosis was seen in the control tumors. Finally, there was no apparent toxicity identified in any of the treated mice. These results suggest that CEP-751 may be a useful therapeutic agent for NBL or MBL.", "title": "" }, { "docid": "7c3457a5ca761b501054e76965b41327", "text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.", "title": "" }, { "docid": "04f4058d37a33245abf8ed9acd0af35d", "text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. 
When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.", "title": "" }, { "docid": "497d6e0bf6f582924745c7aa192579e7", "text": "The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.", "title": "" }, { "docid": "26e423810e3658cc1c2dcbc682c3512c", "text": "Recent years have witnessed the increasing threat of phishing attacks on mobile platforms. In fact, mobile phishing is more dangerous due to the limitations of mobile phones and mobile user habits. Existing schemes designed for phishing attacks on computers/laptops cannot effectively address phishing attacks on mobile devices. This paper presents MobiFish, a novel automated lightweight anti-phishing scheme for mobile platforms. MobiFish verifies the validity of web pages and applications (Apps) by comparing the actual identity to the identity claimed by the web pages and Apps. MobiFish has been implemented on the Nexus 4 smartphone running the Android 4.2 operating system. We experimentally evaluate the performance of MobiFish with 100 phishing URLs and corresponding legitimate URLs, as well as fake Facebook Apps. The result shows that MobiFish is very effective in detecting phishing attacks on mobile phones.", "title": "" }, { "docid": "476bd671b982450d6d1f6c8d7936bcb5", "text": "Walter Thiel developed the method that enables preservation of the body with natural colors in 1992. It consists in the application of an intravascular injection formula, and maintaining the corpse submerged for a determinate period of time in the immersion solution in the pool. After immersion, it is possible to maintain the corpse in a hermetically sealed container, thus avoiding dehydration outside the pool. The aim of this work was to review the Thiel method, searching all scientific articles describing this technique from its development point of view, and application in anatomy and morphology teaching, as well as in clinical and surgical practice. Most of these studies were carried out in Europe.
We used PubMed, Ebsco and Embase databases with the terms “Thiel cadaver”, “Thiel embalming”, “Thiel embalming method” and we searched for papers that cited Thiel's work. In comparison with methods commonly used with high concentrations of formaldehyde, this method lacks the emanation of noxious or irritating gases; gives the corpse important passive joint mobility without stiffness; maintaining color, flexibility and tissue plasticity at a level equivalent to that of a living body. Furthermore, it allows vascular repletion at the capillary level. All this makes for great advantage over the formalin-fixed and fresh material. Its multiple uses are applicable in anatomy teaching and research; teaching for undergraduates (prosection and dissection) and for training in surgical techniques for graduates and specialists (laparoscopies, arthroscopies, endoscopies).", "title": "" }, { "docid": "2cb78c31d07fc14b6088515a1b3c2b45", "text": "A dual-band circularly polarized antenna fed by four apertures that covers the bands of GPS (L1, L2, L5), Galileo (E5a, E5b, E1, E2, L1), and GLONASS (L1, L3) is introduced. A lotus-shaped aperture is added to optimize the coupling between the microstrip lines and the rings. Three wideband planar baluns are used to achieve good axial ratio (lower than 2.1 dB in both bands) and VSWR (41.2%). The measured results of the annular-ring microstrip antenna show good performance of a dual-band operation, and they confirm the validity of this design, which meets the requirement of Global Navigation Satellite System (GNSS) applications.", "title": "" }, { "docid": "375ab5445e81c7982802bdb8b9cbd717", "text": "Advances in healthcare have led to longer life expectancy and an aging population. The cost of caring for the elderly is rising progressively and threatens the economic well-being of many nations around the world. Instead of professional nursing facilities, many elderly people prefer living independently in their own homes. To enable the aging to remain active, this research explores the roles of technology in improving their quality of life while reducing the cost of healthcare to the elderly population. In particular, we propose a multi-agent service framework, called Context-Aware Service Integration System (CASIS), to integrate applications and services. This paper demonstrates several context-aware service scenarios that have been developed on the proposed framework to demonstrate how context technologies and mobile web services can help enhance the quality of care for an elder’s daily", "title": "" }, { "docid": "4446ec55b23ae88192764cffd519afd3", "text": "We present Inferential Power Analysis (IPA), a new class of attacks based on power analysis. An IPA attack has two stages: a profiling stage and a key extraction stage. In the profiling stage, intratrace differencing, averaging, and other statistical operations are performed on a large number of power traces to learn details of the implementation, leading to the location and identification of key bits. In the key extraction stage, the key is obtained from a very few power traces; we have successfully extracted keys from a single trace. Compared to differential power analysis, IPA has the advantages that the attacker does not need either plaintext or ciphertext, and that, in the key extraction stage, a key can be obtained from a small number of traces.", "title": "" } ]
scidocsrr
8277fdf8534c181364996aceb7fbdcda
Bidirectional Long Short-Term Memory Variational Autoencoder
[ { "docid": "e10dbbc6b3381f535ff84a954fcc7c94", "text": "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of Shotton et al. [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×.. .×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "title": "" }, { "docid": "8c70f1af7d3132ca31b0cf603b7c5939", "text": "Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms.", "title": "" }, { "docid": "1d6e23fedc5fa51b5125b984e4741529", "text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. 
We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.", "title": "" }, { "docid": "695af0109c538ca04acff8600d6604d4", "text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "title": "" } ]
[ { "docid": "aa74720aa2d191b9eb25104ee3a33b1e", "text": "We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function, the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.", "title": "" }, { "docid": "b3e1bdd7cfca17782bde698297e191ab", "text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.", "title": "" }, { "docid": "e751fdbc980c36b95c81f0f865bb5033", "text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.", "title": "" }, { "docid": "ee169784b96c5d1cf77d1119f0c55964", "text": "The increasing amount of machinereadable data available in the context of the Semantic Web creates a need for methods that transform such data into human-comprehensible text. In this paper we develop and evaluate a Natural Language Generation (NLG) system that converts RDF data into natural language text based on an ontology and an associated ontology lexicon. 
While it follows a classical NLG pipeline, it diverges from most current NLG systems in that it exploits an ontology lexicon in order to capture context-specific lexicalisations of ontology concepts, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus. We apply the developed approach to the cooking domain, providing both an ontology and an ontology lexicon in lemon format. Finally, we evaluate fluency and adequacy of the generated recipes with respect to two target audiences: cooking novices and advanced cooks.", "title": "" }, { "docid": "b6b553e952dd3ccc79832a6cc4752885", "text": "OBJECTIVE\nThe aim of the present study was to analyze the soft tissue barrier formed to implant abutments made of different materials.\n\n\nMATERIAL AND METHODS\nSix Labrador dogs, about 1 year old, were used. All mandibular premolars and the first, second and third maxillary premolars were extracted. Three months later four implants (OsseoSpeed, 4.5 x 9 mm, Astra Tech Dental, Mölndal, Sweden) were placed in the edentulous premolar region on one side of the mandible and healing abutments were connected. One month later, the healing abutments were disconnected and four new abutments were placed in a randomized order. Two of the abutments were made of titanium (Ti), while the remaining abutments were made of ZrO(2) or AuPt-alloy. A 5-months plaque control program was initiated. Three months after implant surgery, the implant installation procedure and the subsequent abutment shift were repeated in the contra-lateral mandibular region. Two months later, the dogs were euthanized and biopsies containing the implant and the surrounding soft and hard peri-implant tissues were collected and prepared for histological analysis.\n\n\nRESULTS\nIt was demonstrated that the soft tissue dimensions at Ti- and ZrO(2) abutments remained stable between 2 and 5 months of healing. At Au/Pt-alloy abutment sites, however, an apical shift of the barrier epithelium and the marginal bone occurred between 2 and 5 months of healing. In addition, the 80-mum-wide connective tissue zone lateral to the Au/Pt-alloy abutments contained lower amounts of collagen and fibroblasts and larger fractions of leukocytes than the corresponding connective tissue zone of abutments made of Ti and ZrO(2).\n\n\nCONCLUSION\nIt is suggested that the soft tissue healing to abutments made of titanium and ZrO(2) is different to that at abutments made of AuPt-alloy.", "title": "" }, { "docid": "09dfc388fc9eec17c2ec9dd5002af8c3", "text": "Having effective visualizations of filesystem provenance data is valuable for understanding its complex hierarchical structure. The most common visual representation of provenance data is the node-link diagram. While effective for understanding local activity, the node-link diagram fails to offer a high-level summary of activity and inter-relationships within the data. We present a new tool, InProv, which displays filesystem provenance with an interactive radial-based tree layout. The tool also utilizes a new time-based hierarchical node grouping method for filesystem provenance data we developed to match the user's mental model and make data exploration more intuitive. We compared InProv to a conventional node-link based tool, Orbiter, in a quantitative evaluation with real users of filesystem provenance data including provenance data experts, IT professionals, and computational scientists. 
We also compared in the evaluation our new node grouping method to a conventional method. The results demonstrate that InProv results in higher accuracy in identifying system activity than Orbiter with large complex data sets. The results also show that our new time-based hierarchical node grouping method improves performance in both tools, and participants found both tools significantly easier to use with the new time-based node grouping method. Subjective measures show that participants found InProv to require less mental activity, less physical activity, less work, and is less stressful to use. Our study also reveals one of the first cases of gender differences in visualization; both genders had comparable performance with InProv, but women had a significantly lower average accuracy (56%) compared to men (70%) with Orbiter.", "title": "" }, { "docid": "3ff06c4ecf9b8619150c29c9c9a940b9", "text": "It has recently been shown that only a small number of samples from a low-rank matrix are necessary to reconstruct the entire matrix. We bring this to bear on computer vision problems that utilize low-dimensional subspaces, demonstrating that subsampling can improve computation speed while still allowing for accurate subspace learning. We present GRASTA, Grassmannian Robust Adaptive Subspace Tracking Algorithm, an online algorithm for robust subspace estimation from randomly subsampled data. We consider the specific application of background and foreground separation in video, and we assess GRASTA on separation accuracy and computation time. In one benchmark video example [16], GRASTA achieves a separation rate of 46.3 frames per second, even when run in MATLAB on a personal laptop.", "title": "" }, { "docid": "3754b5c86e0032382f144ded5f1ca4d8", "text": "Use and users have an important and acknowledged role to most designers of interactive systems. Nevertheless any touch of user hands does not in itself secure development of meaningful artifacts. In this article we stress the need for a professional PD practice in order to yield the full potentiality of user involvement. We suggest two constituting elements of such a professional PD practice. The existence of a shared 'where-to' and 'why' artifact and an ongoing reflection and off-loop reflection among practitioners in the PD process.", "title": "" }, { "docid": "4309fd090591a107bce978d61aff6a34", "text": "Regular exercise training is recognized as a powerful tool to improve work capacity, endothelial function and the cardiovascular risk profile in obesity, but it is unknown which of high-intensity aerobic exercise, moderate-intensity aerobic exercise or strength training is the optimal mode of exercise. In the present study, a total of 40 subjects were randomized to high-intensity interval aerobic training, continuous moderate-intensity aerobic training or maximal strength training programmes for 12 weeks, three times/week. The high-intensity group performed aerobic interval walking/running at 85-95% of maximal heart rate, whereas the moderate-intensity group exercised continuously at 60-70% of maximal heart rate; protocols were isocaloric. The strength training group performed 'high-intensity' leg press, abdominal and back strength training. Maximal oxygen uptake and endothelial function improved in all groups; the greatest improvement was observed after high-intensity training, and an equal improvement was observed after moderate-intensity aerobic training and strength training. 
High-intensity aerobic training and strength training were associated with increased PGC-1alpha (peroxisome-proliferator-activated receptor gamma co-activator 1alpha) levels and improved Ca(2+) transport in the skeletal muscle, whereas only strength training improved antioxidant status. Both strength training and moderate-intensity aerobic training decreased oxidized LDL (low-density lipoprotein) levels. Only aerobic training decreased body weight and diastolic blood pressure. In conclusion, high-intensity aerobic interval training was better than moderate-intensity aerobic training in improving aerobic work capacity and endothelial function. An important contribution towards improved aerobic work capacity, endothelial function and cardiovascular health originates from strength training, which may serve as a substitute when whole-body aerobic exercise is contra-indicated or difficult to perform.", "title": "" }, { "docid": "cf639e8a3037d94d2e110a2a11411dc6", "text": "Memory-based collaborative filtering (CF) has been studied extensively in the literature and has proven to be successful in various types of personalized recommender systems. In this paper, we develop a probabilistic framework for memory-based CF (PMCF). While this framework has clear links with classical memory-based CF, it allows us to find principled solutions to known problems of CF-based recommender systems. In particular, we show that a probabilistic active learning method can be used to actively query the user, thereby solving the \"new user problem.\" Furthermore, the probabilistic framework allows us to reduce the computational cost of memory-based CF by working on a carefully selected subset of user profiles, while retaining high accuracy. We report experimental results based on two real-world data sets, which demonstrate that our proposed PMCF framework allows an accurate and efficient prediction of user preferences.", "title": "" }, { "docid": "87c7875416503ab1f12de90a597959a4", "text": "Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.", "title": "" }, { "docid": "cb2c0c4e5454c1302a9569b687a50818", "text": "Employee turnover is a serious concern in knowledge based organizations. 
When employees leave an organization, they carry with them invaluable tacit knowledge which is often the source of competitive advantage for the business. In order for an organization to continually have a higher competitive advantage over its competition, it should make it a duty to minimize employee attrition. This study identifies employee related attributes that contribute to the prediction of employees’ attrition in organizations. Three hundred and nine (309) complete records of employees of one of the Higher Institutions in Nigeria who worked in and left the institution between 1978 and 2006 were used for the study. The demographic and job related records of the employee were the main data which were used to classify the employee into some predefined attrition classes. Waikato Environment for Knowledge Analysis (WEKA) and See5 for Windows were used to generate decision tree models and rule-sets. The results of the decision tree models and rule-sets generated were then used for developing a a predictive model that was used to predict new cases of employee attrition. A framework for a software tool that can implement the rules generated in this study was also proposed.", "title": "" }, { "docid": "6059b4bbf5d269d0a5f1f596b48c1acb", "text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.", "title": "" }, { "docid": "15208617386aeb77f73ca7c2b7bb2656", "text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. 
In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. Comparative result shows the modified design is effective when compared in terms of delay with the standard design.", "title": "" }, { "docid": "fc74dadf88736675c860109a95fcdda1", "text": "This paper presents the preliminary work done towards the development of a Gender Recognition System that can be incorporated into the Hindi Automatic Speech Recognition (ASR) System. Gender Recognition (GR) can help in the development of speaker-independent speech recognition systems. This paper presents a general approach to identifying feature vectors that effectively distinguish gender of a speaker from Hindi phoneme utterances. 10 vowels and 5 nasals of the Hindi language were studied for their effectiveness in identifying gender of the speaker. All the 10 vowel Phonemes performed well, while b] bZ] Å] ,] ,s] vks and vkS showed excellent gender distinction performance. All five nasals 3] ́] .k] u and e which were tested, showed a recognition accuracy of almost 100%. The Mel Frequency Cepstral Coefficients (MFCC) are widely used in ASR. The choice of MFCC as features in Gender Recognition will avoid additional computation. The effect of the MFCC feature vector dimension on the GR accuracy was studied and the findings presented. General Terms Automatic speech recognition in Hindi", "title": "" }, { "docid": "6756ede63355b29d9ca5569dab62db26", "text": "This paper presents an approach for the robust recognition of a complex and dynamic driving environment, such as an urban area, using on-vehicle multi-layer LIDAR. The multi-layer LIDAR alleviates the consequences of occlusion by vertical scanning; it can detect objects with different heights simultaneously, and therefore the influence of occlusion can be curbed. The road environment recognition algorithm proposed in this paper consists of three procedures: ego-motion estimation, construction and updating of a 3-dimensional local grid map, and the detection and tracking of moving objects. The integration of these procedures enables us to estimate ego-motion accurately, along with the positions and states of moving objects, the free area where vehicles and pedestrians can move freely, and the ‘unknown’ area, which have never previously been observed in a road environment.", "title": "" }, { "docid": "ab23f66295574368ccd8fc4e1b166ecc", "text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. 
On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.", "title": "" }, { "docid": "653e12c8242f5dfc1523fe9e43cec9a6", "text": "The sentiment index of market participants has been extensively used for stock market prediction in recent years. Many financial information vendors also provide it as a service. However, utilizing market sentiment under the asset allocation framework has been rarely discussed. In this article, we investigate the role of market sentiment in an asset allocation problem. We propose to compute sentiment time series from social media with the help of natural language processing techniques. A novel neural network design, built upon an ensemble of evolving clustering and long short-term memory, is used to formalize sentiment information into market views. These views are later integrated into modern portfolio theory through a Bayesian approach. We analyze the performance of this asset allocation model from many aspects, such as stability of portfolios, computing of sentiment time series, and profitability in our simulations. Experimental results show that our model outperforms some of the most successful forecasting techniques. Thanks to the introduction of the evolving clustering method, the estimation accuracy of market views is significantly improved.", "title": "" }, { "docid": "d3c8903fed280246ea7cb473ee87c0e7", "text": "Reaction time has a been a favorite subject of experimental psychologists since the middle of the nineteenth century. However, most studies ask questions about the organization of the brain, so the authors spend a lot of time trying to determine if the results conform to some mathematical model of brain activity. This makes these papers hard to understand for the beginning student. In this review, I have ignored these brain organization questions and summarized the major literature conclusions that are applicable to undergraduate laboratories using my Reaction Time software. I hope this review helps you write a good report on your reaction time experiment. 
I also apologize to reaction time researchers for omissions and oversimplifications.", "title": "" }, { "docid": "34118709a36ba09a822202753cbff535", "text": "Our healthcare sector daily collects a huge data including clinical examination, vital parameters, investigation reports, treatment follow-up and drug decisions etc. But very unfortunately it is not analyzed and mined in an appropriate way. The Health care industry collects the huge amounts of health care data which unfortunately are not “mined” to discover hidden information for effective decision making for health care practitioners. Data mining refers to using a variety of techniques to identify suggest of information or decision making knowledge in database and extracting these in a way that they can put to use in areas such as decision support , Clustering ,Classification and Prediction. This paper has developed a Computer-Based Clinical Decision Support System for Prediction of Heart Diseases (CCDSS) using Naïve Bayes data mining algorithm. CCDSS can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, spO2,chest pain type, heart rate, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. CCDSS is Webbased, user-friendly, scalable, reliable and expandable. It is implemented on the PHPplatform. Keywords—Computer-Based Clinical Decision Support System(CCDSS), Heart disease, Data mining, Naïve Bayes.", "title": "" } ]
scidocsrr
09451a5858b0da29dd6ea17e4119bffb
Recent advances in techniques for hyperspectral image processing
[ { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" } ]
[ { "docid": "177d78352dab39befe562d17d79315b4", "text": "Having access to relevant patient data is crucial for clinical decision making. The data is often documented in unstructured texts and collected in the electronic health record. In this paper, we evaluate an approach to visualize information extracted from clinical documents by means of tag cloud. Tag clouds will be generated using a bag of word approach and by exploiting part of speech tags. For a real word data set comprising radiological reports, pathological reports and surgical operation reports, tag clouds are generated and a questionnaire-based study is conducted as evaluation. Feedback from the physicians shows that the tag cloud visualization is an effective and rapid approach to represent relevant parts of unstructured patient data. To handle the different medical narratives, we have summarized several possible improvements according to the user feedback and evaluation results.", "title": "" }, { "docid": "470093535d4128efa9839905ab2904a5", "text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.", "title": "" }, { "docid": "381a11fe3d56d5850ec69e2e9427e03f", "text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.", "title": "" }, { "docid": "7f5bc34cd08a09014cff1b07c2cf72d0", "text": "This paper presents the RF telecommunications system designed for the New Horizons mission, NASA’s planned mission to Pluto, with focus on new technologies developed to meet mission requirements. These technologies include an advanced digital receiver — a mission-enabler for its low DC power consumption at 2.3 W secondary power. The receiver is one-half of a card-based transceiver that is incorporated with other spacecraft functions into an integrated electronics module, providing further reductions in mass and power. Other developments include extending APL’s long and successful flight history in ultrastable oscillators (USOs) with an updated design for lower DC power. These USOs offer frequency stabilities to 1 part in 10, stabilities necessary to support New Horizons’ uplink radio science experiment. In antennas, the 2.1 meter high gain antenna makes use of shaped suband main reflectors to improve system performance and achieve a gain approaching 44 dBic. 
New Horizons would also be the first deep-space mission to fly a regenerative ranging system, offering up to a 30 dB performance improvement over sequential ranging, especially at long ranges. The paper will provide an overview of the current system design and development and performance details on the new technologies mentioned above. Other elements of the telecommunications system will also be discussed. Note: New Horizons is NASA’s planned mission to Pluto, and has not been approved for launch. All representations made in this paper are contingent on a decision by NASA to go forward with the preparation for and launch of the mission.", "title": "" }, { "docid": "c21e39d4cf8d3346671ae518357c8edb", "text": "The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is promising approach to constructing deep learning applications in the future.", "title": "" }, { "docid": "3f0d37296258c68a20da61f34364405d", "text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.", "title": "" }, { "docid": "76f60b9e5e894d8bd150a90f6db660a0", "text": "There has been significant progress in recognition of outdoor scenes but indoor scene recognition is still an challenge. This is due to the high appearance fluctuation of indoor situations. With the recent developments in indoor and mobile robotics, identifying the indoor scenes has gained importance. Many approaches have been proposed to detect scenes using object detection and geotags. 
In contrast, the proposal of this paper uses the convolutional neural network which has gained importance with advancement in machine learning methodologies. Our method has higher efficiency than the existing models as we try to classify the environment as a whole rather than using object identification for the same. We test this approach on our dataset which consists of RGB and also depth images of common locations present in academic environments such as class rooms, labs etc. The proposed approach performs better than previous ones with accuracy up to 98%.", "title": "" }, { "docid": "5a85c72c5b9898b010f047ee99dba133", "text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.", "title": "" }, { "docid": "b6bec5e17f8edae3ccd9df5617dce52e", "text": "This technical report describes CHERI ISAv6, the sixth version of the Capability Hardware Enhanced RISC Instructions (CHERI) Instruction-Set Architecture (ISA)1 being developed by SRI International and the University of Cambridge. This design captures seven years of research, development, experimentation, refinement, formal analysis, and validation through hardware and software implementation. CHERI ISAv6 is a substantial enhancement to prior ISA versions: it introduces support for kernel-mode compartmentalization, jump-based rather than exception-based domain transition, architecture-abstracted and efficient tag restoration, and more efficient generated code. A new chapter addresses potential applications of the CHERI model to the RISC-V and x86-64 ISAs, previously described relative only to the 64-bit MIPS ISA. CHERI ISAv6 better explains our design rationale and research methodology. CHERI is a hybrid capability-system architecture that adds new capability-system primitives to a commodity 64-bit RISC ISA enabling software to efficiently implement fine-grained memory protection and scalable software compartmentalization. Design goals have included incremental adoptability within current ISAs and software stacks, low performance overhead for memory protection, significant performance improvements for software compartmentalization, formal grounding, and programmer-friendly underpinnings. Throughout, we have focused on providing strong and efficient architectural foundations for the principles of least privilege and intentional use in the execution of software at multiple levels of abstraction, preventing and mitigating vulnerabilities. 
The CHERI system architecture purposefully addresses known performance and robustness gaps in commodity ISAs that hinder the adoption of more secure programming models centered around the principle of least privilege. To this end, CHERI blends traditional paged virtual memory with an in-address-space capability model that includes capability registers, capability instructions, and tagged memory. CHERI builds on C-language fat-pointer literature: its capabilities describe fine-grained regions of memory and can be substituted for data or code pointers in generated code, protecting data and also improving control-flow robustness. Strong capability integrity and monotonicity properties allow the CHERI model to express a variety of protection properties, from enforcing valid C-language pointer provenance and bounds checking to implementing the isolation and controlled communication structures required for software compartmentalization. CHERI’s hybrid capability-system approach, inspired by the Capsicum security model, allows incremental adoption of capability-oriented design: software implementations that are more robust and resilient can be deployed where they are most needed, while leaving less critical software largely unmodified, but nevertheless suitably constrained to be incapable of having adverse effects. Potential deployment scenarios include low-level software Trusted Computing Bases (TCBs) such as separation kernels, hypervisors, and operating-system kernels, as well as userspace TCBs such as language runtimes and web browsers. Likewise, we see early-use scenarios (such as data compression, protocol parsing, and image processing) that relate to particularly high-risk software libraries, which are concentrations of both complex and historically vulnerability-prone code exposed to untrustworthy data sources, while leaving containing applications unchanged. 1We have attempted to avoid confusion among three rather different uses of the word ‘architecture’. The ISA specifies the interface between hardware and software, rather than describing either the (micro-)architecture of a particular hardware prototype, or laying out the total-system hardware-software architecture.", "title": "" }, { "docid": "e2b173a7ca137f2ecc8dd952a004c5c5", "text": "The clinical approach towards the midface is one of the most important interventions for practitioners when treating age-related changes of the face. Currently a plethora of procedures are used and presented. However, few of these approaches have been validated or passed review board assigned evaluations. Therefore, it is the aim of this work to establish a guideline manual for practitioners for a safe and effective mid-face treatment based on the most current concepts of facial anatomy. The latter is based on the 5-layered structural arrangement and its understanding is the key towards the favoured outcome and for minimizing complications.", "title": "" }, { "docid": "711c56cad778337510bcf1629f6293cc", "text": "Media-related commercial marketing aimed at promoting the purchase of products and services by children, and by adults for children, is ubiquitous and has been associated with negative health consequences such as poor nutrition and physical inactivity. But, as Douglas Evans points out, not all marketing in the electronic media is confined to the sale of products. 
Increasingly savvy social marketers have begun to make extensive use of the same techniques and strategies used by commercial marketers to promote healthful behaviors and to counter some of the negative effects of conventional media marketing to children and adolescents. Evans points out that social marketing campaigns have been effective in helping to prevent and control tobacco use, increase physical activity, improve nutrition, and promote condom use, as well as other positive health behaviors. He reviews the evidence from a number of major recent campaigns and programming in the United States and overseas and describes the evaluation and research methods used to determine their effectiveness. He begins his review of the field of social marketing by describing how it uses many of the strategies practiced so successfully in commercial marketing. He notes the recent development of public health brands and the use of branding as a health promotion strategy. He then goes on to show how social marketing can promote healthful behavior, how it can counter media messages about unhealthful behavior, and how it can encourage discussions between parents and children. Evans concludes by noting some potential future applications to promote healthful media use by children and adolescents and to mitigate the effects of exposure to commercial marketing. These include adapting lessons learned from previous successful campaigns, such as delivering branded messages that promote healthful alternative behaviors. Evans also outlines a message strategy to promote \"smart media use\" to parents, children, and adolescents and suggests a brand based on personal interaction as a desirable alternative to \"virtual interaction\".", "title": "" }, { "docid": "31d2e56c01f53c25c6c9bfcabe21fcbe", "text": "In this paper, we propose a novel computer vision-based fall detection system for monitoring an elderly person in a home care, assistive living application. Initially, a single camera covering the full view of the room environment is used for the video recording of an elderly person's daily activities for a certain time period. The recorded video is then manually segmented into short video clips containing normal postures, which are used to compose the normal dataset. We use the codebook background subtraction technique to extract the human body silhouettes from the video clips in the normal dataset and information from ellipse fitting and shape description, together with position information, is used to provide features to describe the extracted posture silhouettes. The features are collected and an online one class support vector machine (OCSVM) method is applied to find the region in feature space to distinguish normal daily postures and abnormal postures such as falls. The resultant OCSVM model can also be updated by using the online scheme to adapt to new emerging normal postures and certain rules are added to reduce false alarm rate and thereby improve fall detection performance. From the comprehensive experimental evaluations on datasets for 12 people, we confirm that our proposed person-specific fall detection system can achieve excellent fall detection performance with 100% fall detection rate and only 3% false detection rate with the optimally tuned parameters. This work is a semiunsupervised fall detection system from a system perspective because although an unsupervised-type algorithm (OCSVM) is applied, human intervention is needed for segmenting and selecting of video clips containing normal postures. 
As such, our research represents a step toward a complete unsupervised fall detection system.", "title": "" }, { "docid": "03726ab44d068b69eb361a1603db05b9", "text": "Nowadays, cybercrime is growing rapidly around the world, as new technologies, applications and networks emerge. In addition, the Deep Web has contributed to the growth of illegal activities in cyberspace. As a result, cybercriminals are taking advantage of system vulnerabilities for their own benefit. This article presents the history and conceptualization of cybercrime, explores different categorizations of cybercriminals and cyberattacks, and sets forth our exhaustive cyberattack typology, or taxonomy. Common categories include where the computer is the target to commit the crime, where the computer is used as a tool to perpetrate the felony, or where a digital device is an incidental condition to the execution of a crime. We conclude our study by analyzing lessons learned and future actions that can be undertaken to tackle cybercrime and harden cybersecurity at all levels.", "title": "" }, { "docid": "7130731b6603e4be28e8503c185176f2", "text": "CAViAR is a mobile software system for indoor environments that provides to the mobile user equipped with a smartphone indoor localization, augmented reality (AR), visual interaction, and indoor navigation. These capabilities are possible with the availability of state of the art AR technologies. The mobile application includes additional features, such as indoor maps, shortest path, inertial navigation, places of interest, location sharing and voice-commanded search. CAViAR was tested in a University Campus as one of the technologies to be used later in an intelligent Campus environment.", "title": "" }, { "docid": "b114ba10874b57682ee6a14d3f04d469", "text": "Mobile delay tolerant network (MDTN) is a kind of no stabilized end-to-end connection network and has the characteristics of long time delay and intermittent interruption. To forward a network packet, MDTN relies on the relay nodes, using the “store — carry — forwards” routing method. However, nodes will be selfish and unwilling to forward messages for others due to the limited resources such as energy, storage space and bandwidth. Therefore, it is necessary to bring in incentive mechanism to motivate the selfish nodes to cooperatively forward messages. In this paper, we divide present incentive mechanisms into three categories: reputation-based scheme, tit-for-tat (TFT)-based incentive scheme and credit-based incentive scheme. Then we qualitatively analyze and compare typical incentive mechanisms have been proposed. Finally, we make a conclusion and point out the inadequacies in present incentive mechanisms under MDTN.", "title": "" }, { "docid": "5717c8148c93b18ec0e41580a050bf3a", "text": "Verifiability is one of the core editing principles in Wikipedia, editors being encouraged to provide citations for the added content. For a Wikipedia article, determining the citation span of a citation, i.e. what content is covered by a citation, is important as it helps decide for which content citations are still missing. We are the first to address the problem of determining the citation span in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by a citation. We propose a sequence classification approach where for a paragraph and a citation, we determine the citation span at a finegrained level. 
We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics.", "title": "" }, { "docid": "f282a0e666a2b2f3f323870fc07217bd", "text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profi tability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require adequate level of nitrogen, favoring plants and fruit growth. Most the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolong the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general for most cultivars, the fi rst harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.", "title": "" }, { "docid": "605a078c74d37007654094b4b426ece8", "text": "Currently, blockchain technology, which is decentralized and may provide tamper-resistance to recorded data, is experiencing exponential growth in industry and research. In this paper, we propose the MIStore, a blockchain-based medical insurance storage system. Due to blockchain’s the property of tamper-resistance, MIStore may provide a high-credibility to users. In a basic instance of the system, there are a hospital, patient, insurance company and n servers. Specifically, the hospital performs a (t, n)-threshold MIStore protocol among the n servers. For the protocol, any node of the blockchain may join the protocol to be a server if the node and the hospital wish. Patient’s spending data is stored by the hospital in the blockchain and is protected by the n servers. Any t servers may help the insurance company to obtain a sum of a part of the patient’s spending data, which servers can perform homomorphic computations on. However, the n servers cannot learn anything from the patient’s spending data, which recorded in the blockchain, forever as long as more than n − t servers are honest. Besides, because most of verifications are performed by record-nodes and all related data is stored at the blockchain, thus the insurance company, servers and the hospital only need small memory and CPU. Finally, we deploy the MIStore on the Ethererum blockchain and give the corresponding performance evaluation.", "title": "" }, { "docid": "9ba6a2042e99c3ace91f0fc017fa3fdd", "text": "This paper proposes a two-element multi-input multi-output (MIMO) open-slot antenna implemented on the display ground plane of a laptop computer for eight-band long-term evolution/wireless wide-area network operations. The metal surroundings of the antennas have been well integrated as a part of the radiation structure. In the single-element open-slot antenna, the nearby hinge slot (which is bounded by two ground planes and two hinges) is relatively large as compared with the open slot itself and acts as a good radiator. 
In the MIMO antenna consisting of two open-slot elements, a T slot is embedded in the display ground plane and is connected to the hinge slot. The T and hinge slots when connected behave as a radiator; whereas, the T slot itself functions as an isolation element. With the isolation element, simulated isolations between the two elements of the MIMO antenna are raised from 8.3–11.2 to 15–17.1 dB in 698–960 MHz and from 12.1–21 to 15.9–26.7 dB in 1710–2690 MHz. Measured isolations with the isolation element in the desired low- and high-frequency ranges are 17.6–18.8 and 15.2–23.5 dB, respectively. Measured and simulated efficiencies for the two-element MIMO antenna with either element excited are both larger than 50% in the desired operating frequency bands.", "title": "" }, { "docid": "544591326b250f5d68a64f793d55539b", "text": "Introduction: Exfoliative cheilitis, one of a spectrum of diseases that affect the vermilion border of the lips, is uncommon and has no known cause. It is a chronic superficial inflammatory disorder of the vermilion borders of the lips characterized by persistent scaling; it can be a difficult condition to manage. The diagnosis is now restricted to those few patients whose lesions cannot be attributed to other causes, such as contact sensitization or light. Case Report: We present a 17 year-old male presented to the out clinic in Baghdad with the chief complaint of a persistent scaly on his lower lips. The patient reported that the skin over the lip thickened gradually over a 3 days period and subsequently became loose, causing discomfort. Once he peeled away the loosened layer, a new layer began to form again. Conclusion: The lack of specific treatment makes exfoliative cheilitis a chronic disease that radically affects a person’s life. The aim of this paper is to describe a case of recurrent exfoliative cheilitis successfully treated with intralesional corticosteroids and to present possible hypotheses as to the cause.", "title": "" } ]
scidocsrr
21a8779ba69151f965ce3ee2c0bef2b1
A High-Frequency Three-Level Buck Converter With Real-Time Calibration and Wide Output Range for Fast-DVS
[ { "docid": "e2175b85f438342a84453b5ad36ab4c5", "text": "This paper presents a systematic analysis of integrated 3-level buck converters under both ideal and real conditions as a guidance for designing robust and fast 3-level buck converters. Under ideal conditions, the voltage conversion ratio, the output voltage ripple and, in particular, the system's loop-gain function are derived. Design considerations for real circuitry implementations of an integrated 3-level converter, such as the implementation of the flying capacitor, the impacts of the parasitic capacitors of the flying capacitor and the 4 power switches, and the time mismatch between the 2 duty-cycle signals are thoroughly discussed. Under these conditions, the voltage conversion ratio, the voltage across the flying capacitor and the power efficiency are analyzed and verified with Cadence simulation results. The loop-gain function of an integrated 3-level buck converter with parasitic capacitors and time mismatch is derived with the state-space averaging method. The derived loop-gain functions are verified with time-domain small signal injection simulation and measurement, with a good match between the analytical and experimental results.", "title": "" }, { "docid": "b936c3cd8c64a7b7254e003918fb91d5", "text": "On-chip DC-DC converters have the potential to offer fine-grain power management in modern chip-multiprocessors. This paper presents a fully integrated 3-level DC-DC converter, a hybrid of buck and switched-capacitor converters, implemented in 130 nm CMOS technology. The 3-level converter enables smaller inductors (1 nH) than a buck, while generating a wide range of output voltages compared to a 1/2 mode switched-capacitor converter. The test-chip prototype delivers up to 0.85 A load current while generating output voltages from 0.4 to 1.4 V from a 2.4 V input supply. It achieves 77% peak efficiency at power density of 0.1 W/mm2 and 63% efficiency at maximum power density of 0.3 W/mm2. The converter scales output voltage from 0.4 V to 1.4 V (or vice-versa) within 20 ns at a constant 450 mA load current. A shunt regulator reduces peak-to-peak voltage noise from 0.27 V to 0.19 V under pseudo-randomly fluctuating load currents. Using simulations across a wide range of design parameters, the paper compares conversion efficiencies of the 3-level, buck and switched-capacitor converters.", "title": "" }, { "docid": "abe4b6d122d4d13374d70a886906aba7", "text": "A 100-MHz PWM fully integrated buck converter utilizing standard package bondwire as power inductor with enhanced light-load efficiency which occupies 2.25 mm2 in 0.13-μm CMOS is presented. Standard package bondwire instead of on-chip spiral metal or special spiral bondwire is implemented as power inductor to minimize the cost and the conduction loss of an integrated inductor. The accuracy requirement of bondwire inductance is relaxed by an extra discontinuous-conduction-mode (DCM) calibration loop, which solves the precise DCM operation issue of fully integrated converters and eliminates the reverse current-related loss, thus enabling the use of standard package bondwire inductor with various packaging techniques. Optimizations of the power transistors, the input decoupling capacitor (CI), and the controller are also presented to achieve an efficient and robust high-frequency design. With all three major power losses, conduction loss, switching loss, and reverse current related loss, optimized or eliminated, the efficiency is significantly improved. 
An efficiency of 74.8% is maintained at 10 mA, and a peak efficiency of 84.7% is measured at nominal operating conditions with a voltage conversion of 1.2 to 0.9 V. Converters with various bondwire inductances from 3 to 8.5 nH are measured to verify the reliability and compatibility of different packaging techniques.", "title": "" } ]
[ { "docid": "b11decd397b775ab7103e747ba67ba19", "text": "Over the last 60 years, the spotlight of research has periodically returned to the cerebellum as new techniques and insights have emerged. Because of its simple homogeneous structure, limited diversity of cell types and characteristic behavioral pathologies, the cerebellum is a natural home for studies of cell specification, patterning, and neuronal migration. However, recent evidence has extended the traditional range of perceived cerebellar function to include modulation of cognitive processes and implicated cerebellar hypoplasia and Purkinje neuron hypo-cellularity with autistic spectrum disorder. In the light of this emerging frontier, we review the key stages and genetic mechanisms behind cerebellum development. In particular, we discuss the role of the midbrain hindbrain isthmic organizer in the development of the cerebellar vermis and the specification and differentiation of Purkinje cells and granule neurons. These developmental processes are then considered in relation to recent insights into selected human developmental cerebellar defects: Joubert syndrome, Dandy-Walker malformation, and pontocerebellar hypoplasia. Finally, we review current research that opens up the possibility of using the mouse as a genetic model to study the role of the cerebellum in cognitive function.", "title": "" }, { "docid": "1fd0f4fd2d63ef3a71f8c56ce6a25fb5", "text": "A new ‘growing’ maximum likelihood classification algorithm for small reservoir delineation has been developed and is tested with Radarsat-2 data for reservoirs in the semi-arid Upper East Region, Ghana. The delineation algorithm is able to find the land-water boundary from SAR imagery for different weather and environmental conditions. As such, the algorithm allows for remote sensed operational monitoring of small reservoirs.", "title": "" }, { "docid": "e82459841d697a538f3ab77817ed45e7", "text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.", "title": "" }, { "docid": "985c7b11637706e60726cf168790e594", "text": "This Exploratory paper’s second part reveals the detail technological aspects of Hand Gesture Recognition (HGR) System. It further explored HGR basic building blocks, its application areas and challenges it faces. The paper also provides literature review on latest upcoming techniques like – Point Grab, 3D Mouse and Sixth-Sense etc. 
The paper concludes with a focus on major application fields.", "title": "" }, { "docid": "4405611eafc1f6df4c4fa0b60a50f90d", "text": "The balancing robot proposed in this paper relies on two wheels for its movement. Unlike other mobile robots, which are mechanically stable in their standing position, a balancing robot needs a balancing controller that requires an angle value to be used as tilt feedback. The balancing controller controls the robot so that it can maintain its standing position. Besides the balancing control itself, the movement of the balancing robot needs its own controller in order to control the motion while keeping the robot balanced. Both controllers have to be combined, since both of them control the same wheels as the actuators. In this paper we propose a cascaded PID control algorithm to combine the balancing controller and the movement (distance) controller. The movement of the robot is controlled using a distance controller that uses a rotary encoder sensor to measure the traveled distance. The experiment shows that the robot is able to climb a 30-degree sloping board. By cascading the distance control with the balancing control, the robot is able to move forward, turn, and reach the desired position by calculating the body's tilt angle.", "title": "" }, { "docid": "d6fbe041eb639e18c3bb9c1ed59d4194", "text": "Based on a discrete event-triggered communication scheme (DETCS), this paper is concerned with the satisfactory H∞/H2 event-triggered fault-tolerant control problem for networked control systems (NCS) with α-safety degree and an actuator saturation constraint, from the perspective of improving the satisfaction of fault-tolerant control and saving network resources. Firstly, the closed-loop NCS model with actuator failures and actuator saturation is built based on DETCS; secondly, based on a Lyapunov-Krasovskii functional and the definition of α-safety degree given in the paper, a sufficient condition is presented for the NCS with generalized H2 and H∞ performance, which is the contractively invariant set of fault-tolerance with α-safety degree, and the co-design method for the event-triggered parameter and the satisfactory fault-tolerant controller is also given in this paper. Moreover, a simulation example verifies the feasibility of improving system satisfaction and the effectiveness of the method in saving network resources. Finally, the compatibility of the related indexes is also discussed and analyzed.", "title": "" }, { "docid": "d6b3969a6004b5daf9781c67c2287449", "text": "Lotilaner is a new oral ectoparasiticide from the isoxazoline class developed for the treatment of flea and tick infestations in dogs. It is formulated as the pure S-enantiomer in flavoured chewable tablets (Credelio™). The pharmacokinetics of lotilaner were thoroughly determined after intravenous and oral administration and under different feeding regimens in dogs. Twenty-six adult beagle dogs were enrolled in a pharmacokinetic study evaluating either intravenous or oral administration of lotilaner. Following the oral administration of 20 mg/kg, under fed or fasted conditions, or intravenous administration of 3 mg/kg, blood samples were collected up to 35 days after treatment. The effects of the timing of offering food and the amount of food consumed prior to or after dosing on bioavailability were assessed in a separate study in 25 adult dogs. Lotilaner blood concentrations were measured using a validated liquid chromatography/tandem mass spectrometry (LC-MS/MS) method. 
Pharmacokinetic parameters were calculated by non-compartmental analysis. In addition, in vivo enantiomer stability was evaluated in an analytical study. Following oral administration in fed animals, lotilaner was readily absorbed and peak blood concentrations were reached within 2 hours. The terminal half-life was 30.7 days. Food enhanced the absorption, providing an oral bioavailability above 80%, and reduced the inter-individual variability. Moreover, the time of feeding with respect to dosing (fed 30 min prior, fed at dosing or fed 30 min post-dosing) or the reduction of the food ration to one-third of the normal daily ration did not impact bioavailability. Following intravenous administration, lotilaner had a low clearance of 0.18 l/kg/day, large volumes of distribution Vz and Vss of 6.35 and 6.45 l/kg, respectively, and a terminal half-life of 24.6 days. In addition, there was no in vivo racemization of lotilaner. The pharmacokinetic properties of lotilaner administered orally as a flavoured chewable tablet (Credelio™) were studied in detail. With a Tmax of 2 h and a terminal half-life of 30.7 days under fed conditions, lotilaner provides a rapid onset of flea and tick killing activity with consistent and sustained efficacy for at least 1 month.", "title": "" }, { "docid": "7678163641a37a02474bd42a48acec16", "text": "Thiopurine S-methyltransferase (TPMT) is involved in the metabolism of thiopurine drugs. Patients who, due to genetic variation, lack this enzyme or have lower levels than normal can be adversely affected if normal doses of thiopurines are prescribed. The evidence for measuring TPMT prior to starting patients on thiopurine drug therapy has been reviewed and the various approaches to establishing a service considered. Until recently, clinical guidelines on the use of the TPMT test varied by medical specialty. This has now changed, with clear guidance encouraging clinicians to use the TPMT test prior to starting any patient on thiopurine therapy. The TPMT test is the first pharmacogenomic test that has crossed from research to routine use. Several analytical approaches can be taken to assess TPMT status. The use of phenotyping supported with genotyping on selected samples has emerged as the analytical model that has enabled national referral services to be developed to a high level in the UK. The National Health Service now has access to cost-effective and timely TPMT assay services, with two laboratories undertaking the majority of the work at national level and with several local services developing. There appears to be adequate capacity and an appropriate internal market to ensure that TPMT assay services are commensurate with the clinical demand.", "title": "" }, { "docid": "00828ab21f8bb19a5621d6964636425e", "text": "Deep neural networks (DNN) have achieved huge practical success in recent years. However, their theoretical properties (in particular generalization ability) are not yet very clear, since existing error bounds for neural networks cannot be directly used to explain the statistical behaviors of practically adopted DNN models (which are multi-class in their nature and may contain convolutional layers). To tackle the challenge, we derive a new margin bound for DNN in this paper, in which the expected 0-1 error of a DNN model is upper bounded by its empirical margin error plus a Rademacher-average-based capacity term. This new bound is very general and is consistent with the empirical behaviors of DNN models observed in our experiments. 
According to the new bound, minimizing the empirical margin error can effectively improve the test performance of DNN. We therefore propose large-margin DNN algorithms, which impose margin penalty terms on the cross-entropy loss of DNN, so as to reduce the margin error during the training process. Experimental results show that the proposed algorithms can achieve significantly smaller empirical margin errors, as well as better test performance, than the standard DNN algorithm.", "title": "" }, { "docid": "11112e1738bd27f41a5b57f07b71292c", "text": "Rotor-cage fault detection in inverter-fed induction machines is still difficult nowadays as the dynamics introduced by the control or load influence the fault-indicator signals commonly applied. In addition, detection is usually possible only when the machine is operated above a specific load level to generate a significant rotor-current magnitude. This paper proposes a new method of detecting rotor-bar defects at zero load and almost at standstill. The method uses the standard current sensors already present in modern industrial inverters and, hence, is noninvasive. It is thus well suited as a start-up test for drives. By applying an excitation with voltage pulses using the switching of the inverter and then measuring the resulting current slope, a new fault indicator is obtained. As a result, it is possible to clearly identify the fault-induced asymmetry in the machine's transient reactances. Although the transient-flux linkage cannot penetrate the rotor because of the cage, the faulty bar locally influences the zigzag flux, leading to a significant change in the transient reactances. Measurement results show the applicability and sensitivity of the proposed method.", "title": "" }, { "docid": "1feb96d640980e53b2d78f49b58a1a07", "text": "The Machine Learning (ML) field has gained momentum in almost every domain of research and has recently become a reliable tool in the medical domain. The empirical domain of automatic learning is used in tasks such as medical decision support, medical imaging, protein-protein interaction, extraction of medical knowledge, and overall patient management care. ML is envisioned as a tool by which computer-based systems can be integrated in the healthcare field in order to provide better, more efficient medical care. This paper describes an ML-based methodology for building an application that is capable of identifying and disseminating healthcare information. It extracts sentences from published medical papers that mention diseases and treatments, and identifies semantic relations that exist between diseases and treatments. Our evaluation results for these tasks show that the proposed methodology obtains reliable outcomes that could be integrated in an application to be used in the medical care domain. The potential value of this paper stands in the ML settings that we propose and in the fact that we outperform previous results on the same data set.", "title": "" }, { "docid": "19ab044ed5154b4051cae54387767c9b", "text": "An approach is presented for minimizing power consumption in digital systems implemented in CMOS which involves optimization at all levels of the design. This optimization includes the technology used to implement the digital circuits, the circuit style and topology, the architecture for implementing the circuits and, at the highest level, the algorithms that are being implemented. 
The most important technology consideration is the threshold voltage and its control, which allows the reduction of the supply voltage without significant impact on logic speed. Even further supply reductions can be made by the use of an architecture-based voltage scaling strategy, which uses parallelism and pipelining to trade off silicon area for power reduction. Since energy is only consumed when capacitance is being switched, power can be reduced by minimizing this capacitance through operation reduction, choice of number representation, exploitation of signal correlations, resynchronization to minimize glitching, logic design, circuit design, and physical design. The low-power techniques that are presented have been applied to the design of a chipset for a portable multimedia terminal that supports pen input, speech I/O and full-motion video. The entire chipset, which performs protocol conversion, synchronization, error correction, packetization, buffering, video decompression and D/A conversion, operates from a 1.1 V supply and consumes less than 5 mW.", "title": "" }, { "docid": "319dcab62b88bd91095768023db79984", "text": "Purpose—The aim of this guideline is to provide a synopsis of best clinical practices in the rehabilitative care of adults recovering from stroke. Methods—Writing group members were nominated by the committee chair on the basis of their previous work in relevant topic areas and were approved by the American Heart Association (AHA) Stroke Council’s Scientific Statement Oversight Committee and the AHA’s Manuscript Oversight Committee. The panel reviewed relevant articles on adults using computerized searches of the medical literature through 2014. The evidence is organized within the context of the AHA framework and is classified according to the joint AHA/American College of Cardiology and supplementary AHA methods of classifying the level of certainty and the class and level of evidence. The document underwent extensive AHA internal and external peer review, Stroke Council Leadership review, and Scientific Statements Oversight Committee review before consideration and approval by the AHA Science Advisory and Coordinating Committee. Results—Stroke rehabilitation requires a sustained and coordinated effort from a large team, including the patient and his or her goals, family and friends, other caregivers (eg, personal care attendants), physicians, nurses, physical and occupational therapists, speech-language pathologists, recreation therapists, psychologists, nutritionists, social workers, and others. Communication and coordination among these team members are paramount in maximizing the effectiveness and efficiency of rehabilitation and underlie this entire guideline. Without communication and coordination, isolated efforts to rehabilitate the stroke survivor are unlikely to achieve their full potential. Guidelines for Adult Stroke Rehabilitation and Recovery A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association", "title": "" }, { "docid": "b508eee12c615b44b8b671790cf77d77", "text": "Many search engine users face problems while retrieving their required information. For example, a user may find it difficult to retrieve sufficient relevant information because he uses too few keywords to search, or because the user is inexperienced and does not search using proper keywords, so the search engine is not able to capture the user's real meaning from the given keywords. 
Also, due to recent improvements in search engines and the rapid growth of the web, search engines return a huge number of web pages, and the user may take a long time to look at all of these pages to find the needed information. The problem of obtaining relevant results in web searching has been tackled by several approaches. Although very effective techniques are currently used by the most popular search engines, no a priori knowledge of the user's desires beyond the search keywords is available. In this paper, we present an approach for optimizing search engine results using artificial intelligence techniques, such as document clustering and genetic algorithms, to provide the user with the pages most relevant to the search query. The proposed method uses metadata coming from the user preferences or the search engine query log files. These data are important for finding the information most related to the user while searching the web. Finally, the method", "title": "" }, { "docid": "28c19bf17c76a6517b5a7834216cd44d", "text": "The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.", "title": "" }, { "docid": "5c111a5a30f011e4f47fb9e2041644f9", "text": "Since audio recapture can be used to assist audio splicing, it is important to identify whether a suspected audio recording is recaptured or not. However, few works on such detection have been reported. In this paper, we propose a method to detect recaptured audio based on deep learning, and we investigate two deep learning techniques, i.e., a neural network with dropout and stacked auto-encoders (SAE). The waveform samples of an audio frame are directly used as the input to the deep neural network. The experimental results show that an error rate of around 7.5% can be achieved, which indicates that our proposed method can successfully discriminate between recaptured and original audio.", "title": "" }, { "docid": "718433393201b5521a003df6503fe18b", "text": "The issue of potential data misuse arises whenever data is collected from several sources. In a common setting, a large database is either horizontally or vertically partitioned between multiple entities who want to find global trends from the data. Such tasks can be solved with secure multi-party computation (MPC) techniques. However, practitioners tend to consider such solutions inefficient. Furthermore, there are no established tools for applying secure multi-party computation in real-world applications. 
In this paper, we describe Sharemind—a toolkit, which allows data mining specialist with no cryptographic expertise to develop data mining algorithms with good security guarantees. We list the building blocks needed to deploy a privacy-preserving data mining application and explain the design decisions that make Sharemind applications efficient in practice. To validate the practical feasibility of our approach, we implemented and benchmarked four algorithms for frequent itemset mining.", "title": "" }, { "docid": "ad1d572a7ee58c92df5d1547fefba1e8", "text": "The primary source for the blood supply of the head of the femur is the deep branch of the medial femoral circumflex artery (MFCA). In posterior approaches to the hip and pelvis the short external rotators are often divided. This can damage the deep branch and interfere with perfusion of the head. We describe the anatomy of the MFCA and its branches based on dissections of 24 cadaver hips after injection of neoprene-latex into the femoral or internal iliac arteries. The course of the deep branch of the MFCA was constant in its extracapsular segment. In all cases there was a trochanteric branch at the proximal border of quadratus femoris spreading on to the lateral aspect of the greater trochanter. This branch marks the level of the tendon of obturator externus, which is crossed posteriorly by the deep branch of the MFCA. As the deep branch travels superiorly, it crosses anterior to the conjoint tendon of gemellus inferior, obturator internus and gemellus superior. It then perforates the joint capsule at the level of gemellus superior. In its intracapsular segment it runs along the posterosuperior aspect of the neck of the femur dividing into two to four subsynovial retinacular vessels. We demonstrated that obturator externus protected the deep branch of the MFCA from being disrupted or stretched during dislocation of the hip in any direction after serial release of all other soft-tissue attachments of the proximal femur, including a complete circumferential capsulotomy. Precise knowledge of the extracapsular anatomy of the MFCA and its surrounding structures will help to avoid iatrogenic avascular necrosis of the head of the femur in reconstructive surgery of the hip and fixation of acetabular fractures through the posterior approach.", "title": "" } ]
scidocsrr
f970e045521e41af22bcb2716fe7a745
Real-time 6-DOF monocular visual SLAM in a large-scale environment
[ { "docid": "182cc1785fdd5b5d33d3253873c97683", "text": "The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of the camera in the world reference frame from three 2D-3D point correspondences. This problem is known to provide up to four solutions that can then be disambiguated using a fourth point. All existing solutions attempt to first solve for the position of the points in the camera reference frame, and then compute the position and orientation of the camera in the world frame, which alignes the two point sets. In contrast, in this paper we propose a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame. This is made possible by introducing intermediate camera and world reference frames, and expressing their relative position and orientation using only two parameters. The projection of a world point into the parametrized camera pose then leads to two conditions and finally a quartic equation for finding up to four solutions for the parameter pair. A subsequent backsubstitution directly leads to the corresponding camera poses with respect to the world reference frame. We show that the proposed algorithm offers accuracy and precision comparable to a popular, standard, state-of-the-art approach but at much lower computational cost (15 times faster). Furthermore, it provides improved numerical stability and is less affected by degenerate configurations of the selected world points. The superior computational efficiency is particularly suitable for any RANSAC-outlier-rejection step, which is always recommended before applying PnP or non-linear optimization of the final solution.", "title": "" } ]
[ { "docid": "432ea666011ccf3b2fd0cb1d9eb1baa9", "text": "A fully developed nomology for the study of games requires the development of explanatory theoretical constructs associated with validating observational techniques. Drawing from cognition sciences, a framework is proposed based upon the integration of schema theory with attention theory. Cognitive task analysis provides a foundation for preliminary schema descriptions, which can then be elaborated according to more detailed models of cognitive and attentional processes. The resulting theory provides a rich explanatory framework for the cognitive processes underlying game play, as well as detailed hypotheses for the hierarchical structure of pleasures and rewards motivating players. Game engagement is accounted for as a process of schema selection or development, while immersion is explained in terms of schema execution. This framework is being developed not only to explain the substructures of game play, but also to provide schema models that may inform game design processes and provide detailed criteria for the design of patterns of game features for entertainment, pedagogical and therapeutic purposes.", "title": "" }, { "docid": "f43b34ca1bbb85851672ff55a60f0785", "text": "In this paper, we propose an optimized mutual authentication scheme which can keep most password authentication benefits, meanwhile improve the security property by using encryption primitives. Our proposed scheme not only offers webmasters a reasonable secure client authentication, but also offers good user experience. Security analysis demonstrates that the proposed authentication scheme can achieve the security requirements, and also resist the diverse possible attacks.", "title": "" }, { "docid": "a6690e9d1e0682d7bbfdb5f4397c9b4d", "text": "_______________ Task-based learning is a popular topic in ELT/EFL circles nowadays. It is accepted by its proponents as a flourishing method that may replace Communicative Language Learning. However, it can also be seen as an adventure just because there are almost no experimental studies to tackle questions concerning applicability of Task-based Learning. In this paper we try to find out whether or not task-based writing activities have a positive effect upon reading comprehension in English as a foreign language. An experimental study was conducted in order to scrutinize implications of Task-based Learning. Two groups of 28 students were chosen through random cluster sampling. Both groups were given a pre-test and a post-test. The pre-test and post-test mean scores of the experimental group, which got treatment through task-based writing activities, were compared with those of the control group, which was taught English through traditional methods. The effect of the treatment upon reading comprehension was analyzed through two-way ANOVA. The results provide a theoretical justification for the claims of the proponents of Task-based Learning. Theoretical Background Researchers have been discussing and asserting that the Communicative Language Teaching, a method which has a worldwide use nowadays, has some important drawbacks. Having been based on principles of first language acquisition, it lacks a proper theoretical basis about language learning as a cognitive process of skill acquisition and a clear research about second language acquisition (Klapper, 2003:33-34). It puts much emphasis on ‘communication’, pair work, information-gap activities, and intensive target language use (Pica, 2000; Richards and Rodgers, 1996). 
However, teachers and practitioners have encountered some problems while applying it. One of the most important problems was the demotivation of students because of intensive target language use. Task-based Learning is a flourishing method which can compensate for the weaknesses of the Communicative Language Teaching mentioned above and which is seen as an alternative to it by researchers (Klapper, 2003:35-36). ‘Task’ is taken as a goal-oriented activity which has a clear purpose and which involves achieving an outcome, creating a final", "title": "" }, { "docid": "cb654fe04058c8c820352136cc7fe1d4", "text": "We describe the systems of NLP-CIC team that participated in the Complex Word Identification (CWI) 2018 shared task. The shared task aimed to benchmark approaches for identifying complex words in English and other languages from the perspective of non-native speakers. Our goal is to compare two approaches: feature engineering and a deep neural network. Both approaches achieved comparable performance on the English test set. We demonstrated the flexibility of the deeplearning approach by using the same deep neural network setup in the Spanish track. Our systems achieved competitive results: all our systems were within 0.01 of the system with the best macro-F1 score on the test sets except on Wikipedia test set, on which our best system is 0.04 below the best macro-F1 score.", "title": "" }, { "docid": "d54c9a54622a6f5814f00d7193f8dc3b", "text": "Internet of Things (IoT) software is required not only to dispose of huge volumes of real-time and heterogeneous data, but also to support different complex applications for business purposes. Using an ontology approach, a Configurable Information Service Platform is proposed for the development of IoT-based application. Based on an abstract information model, information encapsulating, composing, discomposing, transferring, tracing, and interacting in Product Lifecycle Management could be carried out. Combining ontology and representational state transfer (REST)-ful service, the platform provides an information support base both for data integration and intelligent interaction. A case study is given to verify the platform. It is shown that the platform provides a promising way to realize IoT application in semantic level.", "title": "" }, { "docid": "466f4ed7a59f9b922a8b87685d8f3a77", "text": "Ten cases of oral hairy leukoplakia (OHL) in HIV- negative patients are presented. Eight of the 10 patients were on steroid treatment for chronic obstructive pulmonary disease, 1 patient was on prednisone as part of a therapeutic regimen for gastrointestinal stromal tumor, and 1 patient did not have any history of immunosuppression. There were 5 men and 5 women, ages 32-79, with mean age being 61.8 years. Nine out of 10 lesions were located unilaterally on the tongue, whereas 1 lesion was located at the junction of the hard and soft palate. All lesions were described as painless, corrugated, nonremovable white plaques (leukoplakias). Histologic features were consistent with Epstein-Barr virus-associated hyperkeratosis suggestive of OHL, and confirmatory in situ hybridization was performed in all cases. Candida hyphae and spores were present in 8 cases. 
Pathologists should be aware of OHL presenting not only in HIV-positive and HIV-negative organ transplant recipients but also in patients receiving steroid treatment, and more important, certain histologic features should raise suspicion for such diagnosis without prior knowledge of immunosuppression.", "title": "" }, { "docid": "350d1717a5192873ef9e0ac9ed3efc7b", "text": "OBJECTIVE\nTo describe the effects of percutaneously implanted valve-in-valve in the tricuspid position for patients with pre-existing transvalvular device leads.\n\n\nMETHODS\nIn this case series, we describe implantation of the Melody valve and SAPIEN XT valve within dysfunctional bioprosthetic tricuspid valves in three patients with transvalvular device leads.\n\n\nRESULTS\nIn all cases, the valve was successfully deployed and device lead function remained unchanged. In 1/3 cases with 6-month follow-up, device lead parameters remain unchanged and transcatheter valve-in-valve function remains satisfactory.\n\n\nCONCLUSIONS\nTranscatheter tricuspid valve-in-valve is feasible in patients with pre-existing transvalvular devices leads. Further study is required to determine the long-term clinical implications of this treatment approach.", "title": "" }, { "docid": "519b0dbeb1193a14a06ba212790f49d4", "text": "In recent years, sign language recognition has attracted much attention in computer vision . A sign language is a means of conveying the message by using hand, arm, body, and face to convey thoughts and meanings. Like spoken languages, sign languages emerge and evolve naturally within hearing-impaired communities. However, sign languages are not universal. There is no internationally recognized and standardized sign language for all deaf people. As is the case in spoken language, every country has got its own sign language with high degree of grammatical variations. The sign language used in India is commonly known as Indian Sign Language (henceforth called ISL).", "title": "" }, { "docid": "e644b698d2977a2c767fe86a1445e23c", "text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.", "title": "" }, { "docid": "fba2cce267a075c24a1378fd55de6113", "text": "This paper presents a novel mixed reality rehabilitation system used to help improve the reaching movements of people who have hemiparesis from stroke. The system provides real-time, multimodal, customizable, and adaptive feedback generated from the movement patterns of the subject's affected arm and torso during reaching to grasp. The feedback is provided via innovative visual and musical forms that present a stimulating, enriched environment in which to train the subjects and promote multimodal sensory-motor integration. A pilot study was conducted to test the system function, adaptation protocol and its feasibility for stroke rehabilitation. Three chronic stroke survivors underwent training using our system for six 75-min sessions over two weeks. 
After this relatively short time, all three subjects showed significant improvements in the movement parameters that were targeted during training. Improvements included faster and smoother reaches, increased joint coordination and reduced compensatory use of the torso and shoulder. The system was accepted by the subjects and shows promise as a useful tool for physical and occupational therapists to enhance stroke rehabilitation.", "title": "" }, { "docid": "05ab4fa15696ee8b47e017ebbbc83f2c", "text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.", "title": "" }, { "docid": "e0a314eb1fe221791bc08094d0c04862", "text": "The present study was undertaken with the objective to explore the influence of the five personality dimensions on the information seeking behaviour of the students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by the students of higher education to collect, utilize and process any kind of information needed for their studies. Data has been collected from 600 university students of the three broad disciplines of studies from the Universities of Eastern part of India (West Bengal). The tools used for the study were General Information schedule (GIS), Information Seeking Behaviour Inventory (ISBI) and NEO-FFI Personality Inventory. Product moment correlation has been worked out between the scores in ISBI and those in NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated to all the dimensions of information seeking behaviour of the university students.", "title": "" }, { "docid": "6d83a242e4e0a0bd0d65c239e0d6777f", "text": "Traditional clustering algorithms consider all of the dimensions of an input data set equally. However, in the high dimensional data, a common property is that data points are highly clustered in subspaces, which means classes of objects are categorized in subspaces rather than the entire space. Subspace clustering is an extension of traditional clustering that seeks to find clusters in different subspaces categorical data and its corresponding time complexity is analyzed as well. In the proposed algorithm, an additional step is added to the k-modes clustering process to automatically compute the weight of all dimensions in each cluster by using complement entropy. Furthermore, the attribute weight can be used to identify the subsets of important dimensions that categorize different clusters. The effectiveness of the proposed algorithm is demonstrated with real data sets and synthetic data sets. & 2012 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "b9efdf790c52c63a589719ad58b0e647", "text": "This paper presents a dataset collected from natural dialogs which enables testing the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open-domain dialog systems, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of the interactive gaining of denotations and question explanations from users, which can be used for interactive learning.", "title": "" }, { "docid": "2828aa692e439502de5c950df01701ab", "text": "The Internet of Things (IoT) was born of a vision in which all physical objects are tagged and uniquely identified using RFID transponders or readers. Nowadays, research into the IoT has extended this vision to the connectivity of Things to anything, anyone, anywhere and at any time. The IoT has grown into multiple dimensions, which encompass various networks of applications, computers, devices, as well as physical and virtual objects, referred to as things or objects, that are interconnected using communication technologies such as wireless, wired and mobile networks, RFID, Bluetooth, GPS systems, and other evolving technologies. This paradigm is a major shift from an essentially computer-based network model to a fully distributed network of smart objects. This change poses serious challenges in terms of architecture, connectivity, efficiency, security and provision of services, among many others. This paper studies the state of the art of the IoT. In addition, some major security and privacy issues are described and a new attack vector is introduced, referred to as the “automated invasion attack”.", "title": "" }, { "docid": "08134d0d76acf866a71d660062f2aeb8", "text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorized images. The possible reason is that it’s difficult to differentiate the influence of different kinds of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results show that our method can better control the colorized images and generate state-of-the-art results.", "title": "" }, { "docid": "196fb4c83bf2a0598869698d56a6e1da", "text": "Mammals adapted to a great variety of habitats with different accessibility to water. In addition to changes in kidney morphology, e.g. the length of the loops of Henle, several hormone systems are involved in adaptation to limited water supply, among them the renal-neurohypophysial vasopressin/vasopressin receptor system. 
Comparison of over 80 mammalian V2 vasopressin receptor (V2R) orthologs revealed high structural and functional conservation of this key component involved in renal water reabsorption. Although many mammalian species have unlimited access to water there is no evidence for complete loss of V2R function indicating an essential role of V2R activity for survival even of those species. In contrast, several marsupial V2R orthologs show a significant increase in basal receptor activity. An increased vasopressin-independent V2R activity can be interpreted as a shift in the set point of the renal-neurohypophysial hormone circuit to realize sufficient water reabsorption already at low hormone levels. As found in other desert mammals arid-adapted marsupials show high urine osmolalities. The gain of basal V2R function in several marsupials may contribute to the increased urine concentration abilities and, therefore, provide an advantage to maintain water and electrolyte homeostasis under limited water supply conditions.", "title": "" }, { "docid": "7a77d8d381ec543033626be54119358a", "text": "The advent of continuous glucose monitoring (CGM) is a significant stride forward in our ability to better understand the glycemic status of our patients. Current clinical practice employs two forms of CGM: professional (retrospective or \"masked\") and personal (real-time) to evaluate and/or monitor glycemic control. Most studies using professional and personal CGM have been done in those with type 1 diabetes (T1D). However, this technology is agnostic to the type of diabetes and can also be used in those with type 2 diabetes (T2D). The value of professional CGM in T2D for physicians, patients, and researchers is derived from its ability to: (1) to discover previously unknown hyper- and hypoglycemia (silent and symptomatic); (2) measure glycemic control directly rather than through the surrogate metric of hemoglobin A1C (HbA1C) permitting the observation of a wide variety of metrics that include glycemic variability, the percent of time within, below and above target glucose levels, the severity of hypo- and hyperglycemia throughout the day and night; (3) provide actionable information for healthcare providers derived by the CGM report; (4) better manage patients on hemodialysis; and (5) effectively and efficiently analyze glycemic effects of new interventions whether they be pharmaceuticals (duration of action, pharmacodynamics, safety, and efficacy), devices, or psycho-educational. Personal CGM has also been successfully used in a small number of studies as a behavior modification tool in those with T2D. This comprehensive review describes the differences between professional and personal CGM and the evidence for the use of each form of CGM in T2D. Finally, the opinions of key professional societies on the use of CGM in T2D are presented.", "title": "" }, { "docid": "52a4a964d408d6e66d6864d573ee800f", "text": "Toxoplasma gondii causes fatal multisystemic disease in New World primates, with respiratory failure and multifocal necrotic lesions. Although cases and outbreaks of toxoplasmosis have been described, there are few genotyping studies and none has included parasite load quantification. In this article, we describe two cases of lethal acute toxoplasmosis in squirrel monkeys (Saimiri sciureus) of Mexico city. The main pathological findings included pulmonary edema, interstitial pneumonia, hepatitis and necrotizing lymphadenitis, and structures similar to T. 
gondii tachyzoites observed by histopathology in these organs. Diagnosis was confirmed by immunohistochemistry, transmission electron microscopy and both end point and real time PCR. The load was between <14 and 23 parasites/mg tissue. Digestion of the SAG3 gene amplicon showed similar bands to type I reference strains. These are the first cases of toxoplasmosis in primates studied in Mexico, with clinical features similar to others reported in Israel and French Guiana, although apparently caused by a different T. gondii variant.", "title": "" }, { "docid": "ab7184c576396a1da32c92093d606a53", "text": "Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.", "title": "" } ]
scidocsrr
3b24192d415527372dca571d0fe4c230
Shadow Suppression using RGB and HSV Color Space in Moving Object Detection
[ { "docid": "af752d0de962449acd9a22608bd7baba", "text": "Ї R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. ‡ R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. ‡ R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. ‡ R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.", "title": "" } ]
[ { "docid": "ced13f6c3e904f5bd833e2f2621ae5e2", "text": "A growing amount of research focuses on learning in group settings and more specifically on learning in computersupported collaborative learning (CSCL) settings. Studies on western students indicate that online collaboration enhances student learning achievement; however, few empirical studies have examined student satisfaction, performance, and knowledge construction through online collaboration from a cross-cultural perspective. This study examines satisfaction, performance, and knowledge construction via online group discussions of students in two different cultural contexts. Students were both first-year university students majoring in educational sciences at a Flemish university and a Chinese university. Differences and similarities of the two groups of students with regard to satisfaction, learning process, and achievement were analyzed.", "title": "" }, { "docid": "6bbed2c899db4439ba1f31004e15a040", "text": "Compiler-component generators, such as lexical analyzer generators and parser generators, have long been used to facilitate the construction of compilers. A tree-manipulation language called twig has been developed to help construct efficient code generators. Twig transforms a tree-translation scheme into a code generator that combines a fast top-down tree-pattern matching algorithm with dynamic programming. Twig has been used to specify and construct code generators for several experimental compilers targeted for different machines.", "title": "" }, { "docid": "ac8baab85f1c66b3aa74426e3b8fce14", "text": "OBJECTIVE\nTo evaluate a web-based contingency management program (CM) and a phone-delivered cessation counseling program (Smoking Cessation for Healthy Births [SCHB]) with pregnant smokers in rural Appalachia who were ≤12 weeks gestation at enrollment.\n\n\nDESIGN\nTwo group randomized design.\n\n\nSETTING\nHome-based cessation programs in rural Appalachia Ohio and Kentucky.\n\n\nPARTICIPANTS\nA community sample of pregnant smokers (N = 17).\n\n\nMETHODS\nParticipants completed demographic and smoking-related questionnaires and were assigned to CM (n = 7) or SCHB (n = 10) conditions. Smoking status was assessed monthly using breath carbon monoxide and urinary cotinine.\n\n\nRESULTS\nFor CM, two of seven (28.57%) of the participants achieved abstinence, and three of 10 (30%) of those enrolled in SCHB were abstinent by late in pregnancy. Participants in CM attained abstinence more rapidly than those in SCHB. However, those in SCHB experienced less relapse to smoking, and a greater percentage of these participants reduced their smoking by at least 50%.\n\n\nCONCLUSION\nBased on this initial evaluation, the web-based CM and SCHB programs appeared to be feasible for use with rural pregnant smokers with acceptable program adherence for both approaches. Future researchers could explore combining these programs to capitalize on the strengths of each, for example, rapid smoking cessation based on CM incentives and better sustained cessation or reductions in smoking facilitated by the counseling support of SCHB.", "title": "" }, { "docid": "a3e6d006a56913285d1eb6f0a8e1ce55", "text": "This paper updates and builds on ‘Modelling with Stakeholders’ Voinov and Bousquet, 2010 which demonstrated the importance of, and demand for, stakeholder participation in resource and environmental modelling. This position paper returns to the concepts of that publication and reviews the progress made since 2010. 
A new development is the wide introduction and acceptance of social media and web applications, which dramatically changes the context and scale of stakeholder interactions and participation. Technology advances make it easier to incorporate information in interactive formats via visualization and games to augment participatory experiences. Citizens as stakeholders are increasingly demanding to be engaged in planning decisions that affect them and their communities, at scales from local to global. How people interact with and access models and data is rapidly evolving. In turn, this requires changes in how models are built, packaged, and disseminated: citizens are less in awe of experts and external authorities, and they are increasingly aware of their own capabilities to provide inputs to planning processes, including models. The continued acceleration of environmental degradation and natural resource depletion accompanies these societal changes, even as there is a growing acceptance of the need to transition to alternative, possibly very different, life styles. Substantive transitions cannot occur without significant changes in human behaviour and perceptions. The important and diverse roles that models can play in guiding human behaviour, and in disseminating and increasing societal knowledge, are a feature of stakeholder processes today. © 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d001d61e90dd38eb0eab0c8d4af9d2a6", "text": "Wireless LANs, especially WiFi, have been pervasively deployed and have fostered myriad wireless communication services and ubiquitous computing applications. A primary concern in designing each scenario-tailored application is to combat harsh indoor propagation environments, particularly Non-Line-Of-Sight (NLOS) propagation. The ability to distinguish Line-Of-Sight (LOS) path from NLOS paths acts as a key enabler for adaptive communication, cognitive radios, robust localization, etc. Enabling such capability on commodity WiFi infrastructure, however, is prohibitive due to the coarse multipath resolution with mere MAC layer RSSI. In this work, we dive into the PHY layer and strive to eliminate irrelevant noise and NLOS paths with long delays from the multipath channel responses. To further break away from the intrinsic bandwidth limit of WiFi, we extend to the spatial domain and harness natural mobility to magnify the randomness of NLOS paths while retaining the deterministic nature of the LOS component. We prototype LiFi, a statistical LOS identification scheme for commodity WiFi infrastructure and evaluate it in typical indoor environments covering an area of 1500 m2. Experimental results demonstrate an overall LOS identification rate of 90.4% with a false alarm rate of 9.3%.", "title": "" }, { "docid": "633c906446a11252c3ab9e0aad20189c", "text": "The term \" gamification \" is generally used to denote the application of game mechanisms in non‐gaming environments with the aim of enhancing the processes enacted and the experience of those involved. In recent years, gamification has become a catchword throughout the fields of education and training, thanks to its perceived potential to make learning more motivating and engaging. This paper is an attempt to shed light on the emergence and consolidation of gamification in education/training. It reports the results of a literature review that collected and analysed around 120 papers on the topic published between 2011 and 2014. 
These originate from different countries and deal with gamification both in training contexts and in formal educational, from primary school to higher education. The collected papers were analysed and classified according to various criteria, including target population, type of research (theoretical vs experimental), kind of educational contents delivered, and the tools deployed. The results that emerge from this study point to the increasing popularity of gamification techniques applied in a wide range of educational settings. At the same time, it appears that over the last few years the concept of gamification has become more clearly defined in the minds of researchers and practitioners. Indeed, until fairly recently the term was used by many to denote the adoption of game artefacts (especially digital ones) as educational tools for learning a specific subject such as algebra. In other words, it was used as a synonym of Game Based Learning (GBL) rather than to identify an educational strategy informing the overall learning process, which is treated globally as a game or competition. However, this terminological confusion appears only in a few isolated cases in this literature review, suggesting that a certain level of taxonomic and epistemological convergence is underway.", "title": "" }, { "docid": "66df2a7148d67ffd3aac5fc91e09ee5d", "text": "Tree boosting, which combines weak learners (typically decision trees) to generate a strong learner, is a highly effective and widely used machine learning method. However, the development of a high performance tree boosting model is a time-consuming process that requires numerous trial-and-error experiments. To tackle this issue, we have developed a visual diagnosis tool, BOOSTVis, to help experts quickly analyze and diagnose the training process of tree boosting. In particular, we have designed a temporal confusion matrix visualization, and combined it with a t-SNE projection and a tree visualization. These visualization components work together to provide a comprehensive overview of a tree boosting model, and enable an effective diagnosis of an unsatisfactory training process. Two case studies that were conducted on the Otto Group Product Classification Challenge dataset demonstrate that BOOSTVis can provide informative feedback and guidance to improve understanding and diagnosis of tree boosting algorithms.", "title": "" }, { "docid": "61f0e20762a8ce5c3c40ea200a32dd43", "text": "Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing the system effectiveness is based on analyzing the log files to track the studying time, the number of connections, and earned game bonus points. This study is based on an example of the online application for practical foreign language speaking skills training between random users, which select the role of a teacher or a student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed for both participants, along with user motivation by means of gamification. The actual percentage of successful connects between specifically unmotivated and unfamiliar with each other users was measured. The obtained result can be used for gauging the developed system success and the proposed teaching methodology in general. 
Keywords—elearning; gamification; marketing; monetization; viral marketing; virality", "title": "" }, { "docid": "353bbc5e68ec1d53b3cd0f7c352ee699", "text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "5118d816cb2ede5fa19875cbd50cc7d8", "text": "PURPOSE\nTo review the concepts of reliability and validity, provide examples of how the concepts have been used in nursing research, provide guidance for improving the psychometric soundness of instruments, and report suggestions from editors of nursing journals for incorporating psychometric data into manuscripts.\n\n\nMETHODS\nCINAHL, MEDLINE, and PsycINFO databases were searched using key words: validity, reliability, and psychometrics. Nursing research articles were eligible for inclusion if they were published in the last 5 years, quantitative methods were used, and statistical evidence of psychometric properties were reported. Reports of strong psychometric properties of instruments were identified as well as those with little supporting evidence of psychometric soundness.\n\n\nFINDINGS\nReports frequently indicated content validity but sometimes the studies had fewer than five experts for review. Criterion validity was rarely reported and errors in the measurement of the criterion were identified. Construct validity remains underreported. Most reports indicated internal consistency reliability (alpha) but few reports included reliability testing for stability. When retest reliability was asserted, time intervals and correlations were frequently not included.\n\n\nCONCLUSIONS\nPlanning for psychometric testing through design and reducing nonrandom error in measurement will add to the reliability and validity of instruments and increase the strength of study findings. Underreporting of validity might occur because of small sample size, poor design, or lack of resources. Lack of information on psychometric properties and misapplication of psychometric testing is common in the literature.", "title": "" }, { "docid": "5fb05ef7a15c82c56a222a49a1cc7cf6", "text": "We describe Analyza, a system that helps lay users explore data. Analyza has been used within two large real world systems. The first is a question-and-answer feature in a spreadsheet product. The second provides convenient access to a revenue/inventory database for a large sales force. Both user bases consist of users who do not necessarily have coding skills, demonstrating Analyza's ability to democratize access to data. We discuss the key design decisions in implementing this system. For instance, how to mix structured and natural language modalities, how to use conversation to disambiguate and simplify querying, how to rely on the ``semantics' of the data to compensate for the lack of syntactic structure, and how to efficiently curate the data.", "title": "" }, { "docid": "7cecfd37e44b26a67bee8e9c7dd74246", "text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. 
This paper applies a series of forecasting methods to 90 and 180 days of load data acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation coefficient values for the proposed hybrid ARIMA-GLM method.", "title": "" }, { "docid": "1a819d090746e83676b0fc3ee94fd526", "text": "Brain-computer interfaces (BCIs) use signals recorded from the brain to operate robotic or prosthetic devices. Both invasive and noninvasive approaches have proven effective. Achieving the speed, accuracy, and reliability necessary for real-world applications remains the major challenge for BCI-based robotic control.", "title": "" }, { "docid": "7e941f9534357fca740b97a99e86f384", "text": "The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be controlled accurately by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.", "title": "" }, { "docid": "e0ec89c103aedb1d04fbc5892df288a8", "text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. 
The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.", "title": "" }, { "docid": "80a34e1544f9a20d6e1698278e0479b5", "text": "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.", "title": "" }, { "docid": "36b7b37429a8df82e611df06303a8fcb", "text": "Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual questionanswering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.", "title": "" }, { "docid": "48a400878a5f1fbc3b7b109aa7e9bd2b", "text": "Mutation analysis is usually used to provide indication of the fault detection ability of a test set. It is mainly used for unit testing evaluation. This paper describes mutation analysis principles and their adaptation to the Lustre programming language. Alien-V, a mutation tool for Lustre is presented. Lesar modelchecker is used for eliminating equivalent mutant. A first experimentation to evaluate Lutess testing tool is summarized.", "title": "" }, { "docid": "a757624e5fd2d4a364f484d55a430702", "text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.", "title": "" }, { "docid": "a0c9d3c2b14395a6d476b12c5e8b28b0", "text": "Undergraduate research experiences enhance learning and professional development, but providing effective and scalable research training is often limited by practical implementation and orchestration challenges. 
We demonstrate Agile Research Studios (ARS)---a socio-technical system that expands research training opportunities by supporting research communities of practice without increasing faculty mentoring resources.", "title": "" } ]
scidocsrr
383d3932f49d674b625568a3e6666d21
Designing urban media façades: cases and challenges
[ { "docid": "31ff39eb4322e9856f56729a4d068b73", "text": "Using media façades as a subcategory of urban computing, this paper contributes to the understanding of spatial interaction, sense-making, and social mediation as part of identifying key characteristics of interaction with media façades. Our research addresses in particular the open-ended but framed nature of interaction, which in conjunction with varying interpretations enables individual sense-making. Moreover, we contribute to the understanding of flexible social interaction by addressing urban interaction in relation to distributed attention, shared focus, dialogue and collective action. Finally we address challenges for interaction designers encountered in a complex spatial setting calling for a need to take into account multiple viewing and action positions. Our researchthrough-design approach has included a real-life design intervention in terms of the design, implementation, and reflective evaluation of a 180 m (1937 square feet) interactive media façade in operation 24/7 for more than 50 days.", "title": "" }, { "docid": "3cd39df3222b44989bc3c1e3c66a386e", "text": "In interaction design for experience-oriented uses of technology, a central facet of aesthetics of interaction is rooted in the user's experience of herself “performing her perception.” By drawing on performance (theater) theory, phenomenology and sociology and with references to recent HCI-work on the relation between the system and the performer/user and the spectator's relation to this dynamic, we show how the user is simultaneously operator, performer and spectator when interacting. By engaging with the system, she continuously acts out these three roles and her awareness of them is crucial in her experience. We argue that this 3-in-1 is always already shaping the user's understanding and perception of her interaction as it is staged through her experience of the object's form and expression. Through examples ranging from everyday technologies utilizing performances of interaction to spatial contemporary artworks, digital as well as analogue, we address the notion of the performative spectator and the spectating performer. We demonstrate how perception is also performative and how focus on this aspect seems to be crucial when designing experience-oriented products, systems and services.", "title": "" } ]
[ { "docid": "13db8cca0c58bb14a09effdf08cf909c", "text": "This study aimed to compare the vertical marginal gap of teeth restored with lithium disilicate crowns fabricated using CAD/CAM or by pressed ceramic approach. Twenty mandibular third molar teeth were collected after surgical extractions and prepared to receive full veneer crowns. Teeth were optically scanned and lithium disilicate blocks were used to fabricate crowns using CAD/CAM technique. Polyvinyl siloxane impressions of the prepared teeth were made and monolithic pressed lithium disilicate crowns were fabricated. The marginal gap was measured using optical microscope at 200× magnification (Keyence VHX-5000, Japan). Statistical analysis was performed using Wilcoxon test. The lithium disilicate pressed crowns had significantly smaller (p = 0.006) marginal gaps (38 ± 12 μm) than the lithium disilicate CAD/CAM crowns (45 ± 12 μm). This research indicates that lithium disilicate crowns fabricated with the press technique have measurably smaller marginal gaps compared with those fabricated with CAD/CAM technique within in vitro environments. The marginal gaps achieved by the crowns across all groups were within a clinically acceptable range.", "title": "" }, { "docid": "e8b29527805a29dfe12c22643345e440", "text": "Highly cited articles are interesting because of the potential association between high citation counts and high quality research. This study investigates the 82 most highly cited Information Science and Library Science’ (IS&LS) articles (the top 0.1%) in the Web of Science from the perspectives of disciplinarity, annual citation patterns, and first author citation profiles. First, the relative frequency of these 82 articles was much lower for articles solely in IS&LS than for those in IS&LS and at least one other subject, suggesting that that the promotion of interdisciplinary research in IS&LS may be conducive to improving research quality. Second, two thirds of the first authors had an h-index in IS&LS of less than eight, show that much significant research is produced by researchers without a high overall IS&LS research productivity. Third, there is a moderate correlation (0.46) between citation ranking and the number of years between peak year and year of publication. This indicates that high quality ideas and methods in IS&LS often are deployed many years after being published.", "title": "" }, { "docid": "ccbf1f33f16e7c5283f6f7cbb51d0edd", "text": "This paper reviews current research on supply chain management (SCM) within the context of tourism. SCM in the manufacturing industry has attracted widespread research interest over the past two decades, whereas studies of SCM in the tourism industry are very limited. Stakeholders in the tourism industry interact with each other to resolve their divergent business objectives across different operating systems. The potential benefit of considering not only individual enterprises but also the tourism value chain becomes evident. The paper examines the characteristics of tourism products, and identifies and explores core issues and concepts in tourism supply chains (TSCs) and tourism supply chain management (TSCM). Although there is an emerging literature on TSCM or its equivalents, progress is uneven, as most research focuses on distribution and marketing activities without fully considering the whole range of different suppliers involved in the provision and consumption of tourism products. 
This paper provides a systematic review of current tourism studies from the TSCM perspective and develops a framework for TSCM research that should be of great value not only to those who wish to extend their research into this new and exciting area, but also to tourism and hospitality decision makers. The paper also identifies key research questions in TSCM worthy of future theoretical and empirical exploration. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d13ce7762aeded7a40a7fbe89f1beccf", "text": "[Purpose] This study aims to examined the effect of the self-myofascial release induced with a foam roller on the reduction of stress by measuring the serum concentration of cortisol. [Subjects and Methods] The subjects of this study were healthy females in their 20s. They were divided into the experimental and control groups. Both groups, each consisting of 12 subjects, were directed to walk for 30 minutes on a treadmill. The control group rested for 30 minutes of rest by lying down, whereas the experimental group was performed a 30 minutes of self-myofascial release program. [Results] Statistically significant levels of cortisol concentration reduction were observed in both the experimental group, which used the foam roller, and the control group. There was no statistically significant difference between the two groups. [Conclusion] The Self-myofascial release induced with a foam roller did not affect the reduction of stress.", "title": "" }, { "docid": "47084e8587696dc9d392d895a99ddb83", "text": "We present an online approach to efficiently and simultaneously detect and track the 2D pose of multiple people in a video sequence. We build upon Part Affinity Field (PAF) representation designed for static images, and propose an architecture that can encode and predict Spatio-Temporal Affinity Fields (STAF) across a video sequence. In particular, we propose a novel temporal topology cross-linked across limbs which can consistently handle body motions of a wide range of magnitudes. Additionally, we make the overall approach recurrent in nature, where the network ingests STAF heatmaps from previous frames and estimates those for the current frame. Our approach uses only online inference and tracking, and is currently the fastest and the most accurate bottom-up approach that is runtime invariant to the number of people in the scene and accuracy invariant to input frame rate of camera. Running at ∼30 fps on a single GPU at single scale, it achieves highly competitive results on the PoseTrack benchmarks. 1", "title": "" }, { "docid": "ae6e93de72d10589551b441eaf5077ae", "text": "The interest in cloud computing has increased rapidly in the last two decades. This increased interest is attributed to the important role played by cloud computing in the various aspects of our life. Cloud computing is recently emerged as a new paradigm for hosting and delivering services over the Internet. It is attractive to business owners as well as to researchers as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. As cloud computing is done through the Internet, it faces several kinds of threats due to its nature, where it depends on the network and its users who are distributed around the world. These threats differ in type, its side effect, its reasons, and its main purposes. 
This survey presents the most critical threats to cloud computing with its impacts, its reasons, and some suggested solutions. In addition, this survey determines what the main aspects of the cloud and the security attributes that are affected by each one of these threats. As a result of this survey, we order the most critical threats according to the level of its impact.", "title": "" }, { "docid": "505137d61a0087e054a2cf09c8addb4b", "text": "A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs.", "title": "" }, { "docid": "2ed183563bd5cdaafa96b03836883730", "text": "This is an introduction to the Classic Paper on MOSFET scaling by R. Dennardet al., “Design of Ion-Implanted MOSFET’s with Very Small Physical Dimensions,” published in the IEEE Journal of Solid-State Circuitsin October 1974. The history of scaling and its application to very large scale integration (VLSI) MOSFET technology is traced from 1970 to 1998. The role of scaling in the profound improvements in power delay product over the last three decades is analyzed in basic terms.", "title": "" }, { "docid": "d880349c2760a8cd71d86ea3212ba1f0", "text": "As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.", "title": "" }, { "docid": "6f3985fa6c66bca088394947e0db9e28", "text": "This paper aims to check if the current and prompt technological revolution altering the whole world has crucial impacts on the Tunisian banking sector. 
Particularly, this study seeks some clues on which we can rely in order to understand the customers’ behavior regarding the adoption of electronic banking. To achieve this purpose, an empirical research is carried out in Tunisia and it reveals that a panoply of factors is affecting the customers-attitude toward e-banking. For instance; age, gender and educational qualifications seem to be important and they split up the group into electronic banking adopters and traditional banking defenders and so, they have significant influence on the customers’ adoption of e-banking. Furthermore, this study shows that despite the presidential incentives and in spite of being fully aware of the e-banking’s benefits, numerous respondents are still using the conventional banking. It is worthy to mention that the fear of loss because of transactions errors or hackers plays a significant role in alienating Tunisian customers from online banking. Finally, a number of this study’s limitations are highlighted and some research perspectives are suggested. JIBC December 2009, Vol. 14, No. 3 2", "title": "" }, { "docid": "a04c057110048695669feef07638ef3c", "text": "The structure of recent models of the relationship between natural resource abundance or intensity and economic growth is nearly always the same. An abundance of or heavy dependence on natural resources is taken to influence some variable or mechanism “X” which impedes growth. An important challenge for economic growth theorists and empirical workers in the field is to identify and map these intermediate variables and mechanisms. To date, four main channels of transmission from natural resource abundance or intensity to slow economic growth have been suggested in the literature. As we shall see, these channels can be described as crowding out: natural capital, it will be argued, tends to crowd out other types of capital and thereby inhibit economic growth.", "title": "" }, { "docid": "c28431406873b682a5dabb8a8fed510f", "text": "Business Intelligence (BI) tools provide fundamental support for analyzing large volumes of information. Data Warehouses (DW) and Online Analytical Processing (OLAP) tools are used to store and analyze data. Nowadays more and more information is available on the Web in the form of Resource Description Framework (RDF), and BI tools have a huge potential of achieving better results by integrating real-time data from web sources into the analysis process. In this paper, we describe a framework for so-called exploratory OLAP over RDF sources. We propose a system that uses a multidimensional schema of the OLAP cube expressed in RDF vocabularies. Based on this information the system is able to query data sources, extract and aggregate data, and build a cube. We also propose a computer-aided process for discovering previously unknown data sources and building a multidimensional schema of the cube. We present a use case to demonstrate the applicability of the approach.", "title": "" }, { "docid": "69c9aa877b9416e2a884eaa5408eb890", "text": "Integrating trust and automation in finance.", "title": "" }, { "docid": "38a4f83778adea564e450146060ef037", "text": "The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. 
For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output.", "title": "" }, { "docid": "d484c24551191360bc05b768e2fa9957", "text": "The paper aims to develop and design a cloud-based Quran portal using Drupal technology and make it available in multiple services. The portal can be hosted on cloud and users around the world can access it using any Internet enabled device. The proposed portal includes different features to become a center of learning resources for various users. The portal is further designed to promote research and development of new tools and applications includes Application Programming Interface (API) and Search API, which exposes the search to public, and make the searching Quran efficient and easy. The cloud application can request various surah or ayah using the API and by passing filter.", "title": "" }, { "docid": "fcf7f7562fe3e01bba64a61b7f54b04c", "text": "IMPORTANCE\nBoth bullies and victims of bullying are at risk for psychiatric problems in childhood, but it is unclear if this elevated risk extends into early adulthood.\n\n\nOBJECTIVE\nTo test whether bullying and/or being bullied in childhood predicts psychiatric problems and suicidality in young adulthood after accounting for childhood psychiatric problems and family hardships.\n\n\nDESIGN\nProspective, population-based study.\n\n\nSETTING\nCommunity sample from 11 counties in Western North Carolina.\n\n\nPARTICIPANTS\nA total of 1420 participants who had being bullied and bullying assessed 4 to 6 times between the ages of 9 and 16 years. Participants were categorized as bullies only, victims only, bullies and victims (hereafter referred to as bullies/victims), or neither.\n\n\nMAIN OUTCOME MEASURE\nPsychiatric outcomes, which included depression, anxiety, antisocial personality disorder, substance use disorders, and suicidality (including recurrent thoughts of death, suicidal ideation, or a suicide attempt), were assessed in young adulthood (19, 21, and 24-26 years) by use of structured diagnostic interviews. RESULTS Victims and bullies/victims had elevated rates of young adult psychiatric disorders, but also elevated rates of childhood psychiatric disorders and family hardships. After controlling for childhood psychiatric problems or family hardships, we found that victims continued to have a higher prevalence of agoraphobia (odds ratio [OR], 4.6 [95% CI, 1.7-12.5]; P < .01), generalized anxiety (OR, 2.7 [95% CI, 1.1-6.3]; P < .001), and panic disorder (OR, 3.1 [95% CI, 1.5-6.5]; P < .01) and that bullies/victims were at increased risk of young adult depression (OR, 4.8 [95% CI, 1.2-19.4]; P < .05), panic disorder (OR, 14.5 [95% CI, 5.7-36.6]; P < .001), agoraphobia (females only; OR, 26.7 [95% CI, 4.3-52.5]; P < .001), and suicidality (males only; OR, 18.5 [95% CI, 6.2-55.1]; P < .001). 
Bullies were at risk for antisocial personality disorder only (OR, 4.1 [95% CI, 1.1-15.8]; P < .04).\n\n\nCONCLUSIONS AND RELEVANCE\nThe effects of being bullied are direct, pleiotropic, and long-lasting, with the worst effects for those who are both victims and bullies.", "title": "" }, { "docid": "27bba2c0a5d3d7f3260b64c3fb0ef4f6", "text": "Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs in the past decade did not match the increase of drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The \"central hit strategy\" selectively targets central nodes/edges of the flexible networks of infectious agents or cancer cells to kill them. The \"network influence strategy\" works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes/edges. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods helping hit identification, lead selection optimizing drug efficacy, as well as minimizing side-effects and drug toxicity. Successful network-based drug development strategies are shown through the examples of infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing >1200 references we suggest an optimized protocol of network-aided drug development, and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends helping to achieve these hallmarks by a cohesive, global approach.", "title": "" }, { "docid": "70a8de504b5ab8cdea1b87ab6028a3f3", "text": "There are two major challenges in universal stair climbing: stairs without riser and with nose, and stairs with various dimensions. In this study, we proposed an indoor robot platform to overcome these challenges. First, to create an angle of attack, the Tusk, a passive, protruded element, was added in front of a 4-wheel robot. For design analysis and optimization of the Tusk, a simplified model of universal stair climbing was applied. To accommodate stairs without risers and with nose, the assistive track mechanism was applied. To climb the stair regardless of its dimension, length-adjustable mechanism was added. The results indicated the robot with these mechanisms successfully overcame each challenge. The performance was better than most conventional stair-climbing robots in terms of the range of compatible stairs. We expect these new approaches to expand the range of indoor robot operation with minimal cost.", "title": "" }, { "docid": "14520419a4b0e27df94edc4cf23cde65", "text": "In this paper we propose and examine non–parametric statistical tests to define similarity and homogeneity measure s for textures. The statistical tests are applied to the coeffi cients of images filtered by a multi–scale Gabor filter bank. 
We will demonstrate that these similarity measures are useful both for texture-based image retrieval and for unsupervised texture segmentation, and hence offer a unified approach to these closely related tasks. We present results on Brodatz-like micro-textures and a collection of real-world images.", "title": "" }, { "docid": "a18ef88938a0d391874a8be61c27694a", "text": "A growing body of literature has emerged that focuses upon cognitive assessment of video game player experience. Given the growing popularity of video gaming and the increasing literature on cognitive aspects of video gamers, there is a growing need for novel approaches to assessment of the cognitive processes that occur while persons are immersed in video games. In this study, we assessed various stimulus modalities and gaming events using an off-the-shelf EEG device. A significant difference was found among different stimulus modalities with increasingly difficult cognitive demands. Specifically, beta and gamma power were significantly increased during high intensity events when compared to low intensity gaming events. Our findings suggest that the Emotiv EEG can be used to differentiate between varying stimulus modalities and accompanying cognitive processes. © 2015 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
f6a2d39376c399ca6603cef87034eb89
Digital Advertising Traffic Operation: Machine Learning for Process Discovery
[ { "docid": "2c92948916257d9b164e7d65aa232d3e", "text": "Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we propose a technique for rediscovering workflow models. This technique uses workflow logs to discover the workflow process as it is actually being executed. The workflow log contains information about events taking place. We assume that these events are totally ordered and each event refers to one task being executed for a single case. This information can easily be extracted from transactional information systems (e.g., Enterprise Resource Planning systems such as SAP and Baan). The rediscovering technique proposed in this paper can deal with noise and can also be used to validate workflow processes by uncovering and measuring the discrepancies between prescriptive models and actual process executions.", "title": "" } ]
[ { "docid": "b9b2db174b8fa77516f1c03186a993da", "text": "The cutting stock problem it is of great interest in relation with several real world problems. Basically it means that there are some smaller pieces that have to be cut from a greater stock piece, in such a way, that the remaining part of the stock piece should be minimal. The classical solution methods of this problem generally need a great amount of calculation. In order to reduce the computational load they use heuristics. A newer solution method is presented in this paper, which is based on a genetic technique. This method uses a tree representation of the cutting pattern, and combines different patterns in order to achive patterns with higher performance. The combination of the cutting patterns is realized by a combined crossover mutation operator. An application of the proposed method is presented briefly in the end of the paper.", "title": "" }, { "docid": "f0c25bb609bc6946b558bcd0ccdaee22", "text": "A biologically motivated computational model of bottom-up visual selective attention was used to examine the degree to which stimulus salience guides the allocation of attention. Human eye movements were recorded while participants viewed a series of digitized images of complex natural and artificial scenes. Stimulus dependence of attention, as measured by the correlation between computed stimulus salience and fixation locations, was found to be significantly greater than that expected by chance alone and furthermore was greatest for eye movements that immediately follow stimulus onset. The ability to guide attention of three modeled stimulus features (color, intensity and orientation) was examined and found to vary with image type. Additionally, the effect of the drop in visual sensitivity as a function of eccentricity on stimulus salience was examined, modeled, and shown to be an important determiner of attentional allocation. Overall, the results indicate that stimulus-driven, bottom-up mechanisms contribute significantly to attentional guidance under natural viewing conditions.", "title": "" }, { "docid": "b206560e0c9f3e59c8b9a8bec6f12462", "text": "A symmetrical microstrip directional coupler design using the synthesis technique without prior knowledge of the physical geometry of the directional coupler is analytically given. The introduced design method requires only the information of the port impedances, the coupling level, and the operational frequency. The analytical results are first validated by using a planar electromagnetic simulation tool and then experimentally verified. The error between the experimental and analytical results is found to be within 3% for the worst case. The design charts that give all the physical dimensions, including the length of the directional coupler versus frequency and different coupling levels, are given for alumina, Teflon, RO4003, FR4, and RF-60, which are widely used in microwave applications. The complete design of symmetrical two-line microstrip directional couplers can be obtained for the first time using our results in this paper.", "title": "" }, { "docid": "ff9ca485a07dca02434396eca0f0c94f", "text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. 
On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.", "title": "" }, { "docid": "97a1453d230df4f8c57eed1d3a1aaa19", "text": "In this letter, an isolation improvement method between two closely packed planar inverted-F antennas (PIFAs) is proposed via a miniaturized ground slot with a chip capacitor. The proposed T-shaped ground slot acts as a notch filter, and the capacitor is utilized to reduce the slot length. The equivalent circuit model of the proposed slot with the capacitor is derived. The measured isolation between two PIFAs is down to below -20 dB at the whole WLAN band of 2.4 GHz.", "title": "" }, { "docid": "470810494ae81cc2361380c42116c8d7", "text": "Sustainability is significantly important for fashion business due to consumers’ increasing awareness of environment. When a fashion company aims to promote sustainability, the main linkage is to develop a sustainable supply chain. This paper contributes to current knowledge of sustainable supply chain in the textile and clothing industry. We first depict the structure of sustainable fashion supply chain including eco-material preparation, sustainable manufacturing, green distribution, green retailing, and ethical consumers based on the extant literature. We study the case of the Swedish fast fashion company, H&M, which has constructed its sustainable supply chain in developing eco-materials, providing safety training, monitoring sustainable manufacturing, reducing carbon emission in distribution, and promoting eco-fashion. Moreover, based on the secondary data and analysis, we learn the lessons of H&M’s sustainable fashion supply chain from the country perspective: (1) the H&M’s sourcing managers may be more likely to select suppliers in the countries with lower degrees of human wellbeing; (2) the H&M’s supply chain manager may set a higher level of inventory in a country with a higher human wellbeing; and (3) the H&M CEO may consider the degrees of human wellbeing and economic wellbeing, instead of environmental wellbeing when launching the online shopping channel in a specific country.", "title": "" }, { "docid": "e1066f3b7ff82667dbc7186f357dd406", "text": "Generative adversarial networks (GANs) are becoming increasingly popular for image processing tasks. Researchers have started using GAN s for speech enhancement, but the advantage of using the GAN framework has not been established for speech enhancement. For example, a recent study reports encouraging enhancement results, but we find that the architecture of the generator used in the GAN gives better performance when it is trained alone using the $L_1$ loss. 
This work presents a new GAN for speech enhancement, and obtains performance improvement with the help of adversarial training. A deep neural network (DNN) is used for time-frequency mask estimation, and it is trained in two ways: regular training with the $L_1$ loss and training using the GAN framework with the help of an adversary discriminator. Experimental results suggest that the GAN framework improves speech enhancement performance. Further exploration of loss functions, for speech enhancement, suggests that the $L_1$ loss is consistently better than the $L_2$ loss for improving the perceptual quality of noisy speech.", "title": "" }, { "docid": "f1d1a73f21dcd1d27da4e9d4a93c5581", "text": "Movements of interfaces can be analysed in terms of whether they are sensible, sensable and desirable. Sensible movements are those that users naturally perform; sensable are those that can be measured by a computer; and desirable movements are those that are required by a given application. We show how a systematic comparison of sensible, sensable and desirable movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: the Augurscope II, a mobile augmented reality interface for outdoors; the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs; and pointing flashlights at walls and posters in order to play sounds.", "title": "" }, { "docid": "1b7efa9ffda9aa23187ae7028ea5d966", "text": "Tools for clinical assessment and escalation of observation and treatment are insufficiently established in the newborn population. We aimed to provide an overview over early warning- and track and trigger systems for newborn infants and performed a nonsystematic review based on a search in Medline and Cinahl until November 2015. Search terms included 'infant, newborn', 'early warning score', and 'track and trigger'. Experts in the field were contacted for identification of unpublished systems. Outcome measures included reference values for physiological parameters including respiratory rate and heart rate, and ways of quantifying the extent of deviations from the reference. Only four neonatal early warning scores were published in full detail, and one system for infants with cardiac disease was considered as having a more general applicability. Temperature, respiratory rate, heart rate, SpO2, capillary refill time, and level of consciousness were parameters commonly included, but the definition and quantification of 'abnormal' varied slightly. The available scoring systems were designed for term and near-term infants in postpartum wards, not neonatal intensive care units. In conclusion, there is a limited availability of neonatal early warning scores. Scoring systems for high-risk neonates in neonatal intensive care units and preterm infants were not identified.", "title": "" }, { "docid": "d3834e337ca661d3919674a8acc1fa0c", "text": "Relative (or receiver) operating characteristic (ROC) curves are a graphical representation of the relationship between sensitivity and specificity of a laboratory test over all possible diagnostic cutoff values. Laboratory medicine has been slow to adopt the use of ROC curves for the analysis of diagnostic test performance. In this tutorial, we discuss the advantages and limitations of the ROC curve for clinical decision making in laboratory medicine. 
We demonstrate the construction and statistical uses of ROC analysis, review its published applications in clinical pathology, and comment on its role in the decision analytic framework in laboratory medicine.", "title": "" }, { "docid": "049c9e3abf58bfd504fa0645bb4d1fdc", "text": "The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.", "title": "" }, { "docid": "74a9612c1ca90a9d7b6152d19af53d29", "text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.", "title": "" }, { "docid": "45be2fbf427a3ea954a61cfd5150db90", "text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. 
We identify several evaluation criteria, and propose manual and automatic evaluation approaches.", "title": "" }, { "docid": "2da6c199c7561855fde9be6f4798a4af", "text": "Ontogenetic development of the digestive system in golden pompano (Trachinotus ovatus, Linnaeus 1758) larvae was histologically and enzymatically studied from hatch to 32 day post-hatch (DPH). The development of digestive system in golden pompano can be divided into three phases: phase I starting from hatching and ending at the onset of exogenous feeding; phase II starting from first feeding (3 DPH) and finishing at the formation of gastric glands; and phase III starting from the appearance of gastric glands on 15 DPH and continuing onward. The specific activities of trypsin, amylase, and lipase increased sharply from the onset of first feeding to 5–7 DPH, followed by irregular fluctuations. Toward the end of this study, the specific activities of trypsin and amylase showed a declining trend, while the lipase activity remained at similar levels as it was at 5 DPH. The specific activity of pepsin was first detected on 15 DPH and increased with fish age. The dynamics of digestive enzymes corresponded to the structural development of the digestive system. The enzyme activities tend to be stable after the formation of the gastric glands in fish stomach on 15 DPH. The composition of digestive enzymes in larval pompano indicates that fish are able to digest protein, lipid and carbohydrate at early developmental stages. Weaning of larval pompano is recommended from 15 DPH onward. Results of the present study lead to a better understanding of the ontogeny of golden pompano during the larval stage and provide a guide to feeding and weaning of this economically important fish in hatcheries.", "title": "" }, { "docid": "9daa362cc15e988abdc117786b000741", "text": "The objective of this paper is to develop the hybrid neural network models for bankruptcy prediction. The proposed hybrid neural network models are (1) a MDA-assisted neural network, (2) an ID3-assisted neural network, and (3) a SOFM(self organizing feature map)-assisted neural network. Both the MDA-assisted neural network and the ID3-assisted neural network are the neural network models operating with the input variables selected by the MDA method and ID3 respectively. The SOFM-assisted neural network combines a backpropagation model (supervised learning) with a SOFM model (unsupervised learning). The performance of the hybrid neural network model is evaluated using MDA and ID3 as a benchmark. Empirical results using Korean bankruptcy data show that hybrid neural network models are very promising neural network models for bankruptcy prediction in terms of predictive accuracy and adaptability.", "title": "" }, { "docid": "39188ae46f22dd183f356ba78528b720", "text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. 
First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.", "title": "" }, { "docid": "ca62a58ac39d0c2daaa573dcb91cd2e0", "text": "Blast-related head injuries are one of the most prevalent injuries among military personnel deployed in service of Operation Iraqi Freedom. Although several studies have evaluated symptoms after blast injury in military personnel, few studies compared them to nonblast injuries or measured symptoms within the acute stage after traumatic brain injury (TBI). Knowledge of acute symptoms will help deployed clinicians make important decisions regarding recommendations for treatment and return to duty. Furthermore, differences more apparent during the acute stage might suggest important predictors of the long-term trajectory of recovery. This study evaluated concussive, psychological, and cognitive symptoms in military personnel and civilian contractors (N = 82) diagnosed with mild TBI (mTBI) at a combat support hospital in Iraq. Participants completed a clinical interview, the Automated Neuropsychological Assessment Metric (ANAM), PTSD Checklist-Military Version (PCL-M), Behavioral Health Measure (BHM), and Insomnia Severity Index (ISI) within 72 hr of injury. Results suggest that there are few differences in concussive symptoms, psychological symptoms, and neurocognitive performance between blast and nonblast mTBIs, although clinically significant impairment in cognitive reaction time for both blast and nonblast groups is observed. Reductions in ANAM accuracy were related to duration of loss of consciousness, not injury mechanism.", "title": "" }, { "docid": "fd208ec9a2d74306495ac8c6d454bfd6", "text": "This qualitative study investigates the perceptions of suburban middle school students’ on academic motivation and student engagement. Ten students, grades 6-8, were randomly selected by the researcher from school counselors’ caseloads and the primary data collection techniques included two types of interviews; individual interviews and focus group interviews. Findings indicate students’ motivation and engagement in middle school is strongly influenced by the social relationships in their lives. The interpersonal factors identified by students were peer influence, teacher support and teacher characteristics, and parental behaviors. Each of these factors consisted of academic and social-emotional support which hindered and/or encouraged motivation and engagement. Students identified socializing with their friends as a means to want to be in school and to engage in learning. 
Also, students are more engaged and motivated if they believe their teachers care about their academic success and value their job. Lastly, parental involvement in academics appeared to be more crucial for younger students than older students in order to encourage motivation and engagement in school. Middle School Students’ Perceptions on Student Engagement and Academic Motivation Early adolescence marks a time for change for students academically and socially. Students are challenged academically in the sense that there is greater emphasis on developing specific intellectual and cognitive capabilities in school, while at the same time they are attempting to develop social skills and meaningful relationships. It is often easy to overlook the social and interpersonal challenges faced by students in the classroom when there is a large focus on grades in education, especially since teachers’ competencies are often assessed on their students’ academic performance. When schools do not consider psychosocial needs of students, there is a decrease in academic motivation and interest, lower levels of student engagement and poorer academic performance (i.e. grades) for middle school students (Wang & Eccles, 2013). In fact, students who report high levels of engagement in school are 75% more likely to have higher grades and higher attendance rates. Disengaged students tend to have lower grades and are more likely to drop out of school (Klem & Connell, 2004). Therefore, this research has focused on understanding the connections between certain interpersonal influences and academic motivation and engagement.", "title": "" }, { "docid": "d4bd583808c9e105264c001cbcb6b4b0", "text": "It is common for clinicians, researchers, and public policymakers to describe certain drugs or objects (e.g., games of chance) as “addictive,” tacitly implying that the cause of addiction resides in the properties of drugs or other objects. Conventional wisdom encourages this view by treating different excessive behaviors, such as alcohol dependence and pathological gambling, as distinct disorders. Evidence supporting a broader conceptualization of addiction is emerging. For example, neurobiological research suggests that addictive disorders might not be independent: each outwardly unique addiction disorder might be a distinctive expression of the same underlying addiction syndrome. Recent research pertaining to excessive eating, gambling, sexual behaviors, and shopping also suggests that the existing focus on addictive substances does not adequately capture the origin, nature, and processes of addiction. The current view of separate addictions is similar to the view espoused during the early days of AIDS diagnosis, when rare diseases were not", "title": "" }, { "docid": "a9a22c9c57e9ba8c3deefbea689258d5", "text": "Functional neuroimaging studies have shown that romantic love and maternal love are mediated by regions specific to each, as well as overlapping regions in the brain's reward system. Nothing is known yet regarding the neural underpinnings of unconditional love. The main goal of this functional magnetic resonance imaging study was to identify the brain regions supporting this form of love. Participants were scanned during a control condition and an experimental condition.
In the control condition, participants were instructed to simply look at a series of pictures depicting individuals with intellectual disabilities. In the experimental condition, participants were instructed to feel unconditional love towards the individuals depicted in a series of similar pictures. Significant loci of activation were found, in the experimental condition compared with the control condition, in the middle insula, superior parietal lobule, right periaqueductal gray, right globus pallidus (medial), right caudate nucleus (dorsal head), left ventral tegmental area and left rostro-dorsal anterior cingulate cortex. These results suggest that unconditional love is mediated by a distinct neural network relative to that mediating other emotions. This network contains cerebral structures known to be involved in romantic love or maternal love. Some of these structures represent key components of the brain's reward system.", "title": "" } ]
scidocsrr
6d3ee2b196185ba6e9b63886f85de141
Enhancing First Story Detection using Word Embeddings
[ { "docid": "a85c13406ddc3dc057f029ba96fdffe1", "text": "We apply statistical machine translation (SMT) tools to generate novel paraphrases of input sentences in the same language. The system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the World Wide Web. Alignment Error Rate (AER) is measured to gauge the quality of the resulting corpus. A monotone phrasal decoder generates contextual replacements. Human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.", "title": "" } ]
[ { "docid": "3b9ab1832864eda1a67fc46d425de468", "text": "Wind-photovoltaic hybrid system (WPHS) utilization is becoming popular due to increasing energy costs and decreasing prices of turbines and photovoltaic (PV) panels. However, prior to construction of a renewable generation station, it is necessary to determine the optimum number of PV panels and wind turbines for minimal cost during continuity of generated energy to meet the desired consumption. In fact, the traditional sizing procedures find optimum number of the PV modules and wind turbines subject to minimum cost. However, the optimum battery capacity is either not taken into account, or it is found by a full search between all probable solution spaces which requires extensive computation. In this study, a novel description of the production/consumption phenomenon is proposed, and a new sizing procedure is developed. Using this procedure, optimum battery capacity, together with optimum number of PV modules and wind turbines subject to minimum cost can be obtained with good accuracy. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3eb3ae4ac8236851b1399629b9577085", "text": "We study the problem of troubleshooting machine learning systems that rely on analytical pipelines of distinct components. Understanding and fixing errors that arise in such integrative systems is difficult as failures can occur at multiple points in the execution workflow. Moreover, errors can propagate, become amplified or be suppressed, making blame assignment difficult. We propose a human-in-the-loop methodology which leverages human intellect for troubleshooting system failures. The approach simulates potential component fixes through human computation tasks and measures the expected improvements in the holistic behavior of the system. The method provides guidance to designers about how they can best improve the system. We demonstrate the effectiveness of the approach on an automated image captioning system that has been pressed into real-world use.", "title": "" }, { "docid": "867ddbd84e8544a5c2d6f747756ca3d9", "text": "We report a 166 W burst mode pulse fiber amplifier seeded by a Q-switched mode-locked all-fiber laser at 1064 nm based on a fiber-coupled semiconductor saturable absorber mirror. With a pump power of 230 W at 976 nm, the output corresponds to a power conversion efficiency of 74%. The repetition rate of the burst pulse is 20 kHz, the burst energy is 8.3 mJ, and the burst duration is ∼ 20 μs, which including about 800 mode-locked pulses at a repetition rate of 40 MHz and the width of the individual mode-locked pulse is measured to be 112 ps at the maximum output power. To avoid optical damage to the fiber, the initial mode-locked pulses were stretched to 72 ps by a bandwidth-limited fiber bragg grating. After a two-stage preamplifier, the pulse width was further stretched to 112 ps, which is a result of self-phase modulation of the pulse burst during the amplification.", "title": "" }, { "docid": "d79a4bae5e7464d2f2acc51f1f22ccbe", "text": "The inductance calculation and the layout optimization for spiral inductors still are research topics of actuality and very interesting especially in radio frequency integrated circuits. Our research work is fixed on the vast topics of this research area. In this effect we create a software program dedicate to dc inductance calculation and to layout optimization for spiral inductors. 
We use a wide range of inductors in the application made with our program; we compare our applications results with measurements results existing in the literature and with three-dimensional commercial field solver results in order to validate it. Our program is accurate enough, has a very friendly interface, is very easy to use and its running time is very sort compared with other similar programs. Since spiral inductors tolerance is generally on the order of several percent, a more accurate program is not needed in practice. The program is very useful for the spiral inductor design because it calculates the inductance of spiral inductors with a very good accuracy and also for the spiral inductor optimization, because it optimize the spiral inductor layouts in terms of technological restrictions and/or in terms of the designers' needs.", "title": "" }, { "docid": "67d141b8e53e1398b6988e211d16719e", "text": "the recent advancement of networking technology has enabled the streaming of video content over wired/wireless network to a great extent. Video streaming includes various types of video content, namely, IP television (IPTV), Video on demand (VOD), Peer-to-Peer (P2P) video sharing, Voice (and video) over IP (VoIP) etc. The consumption of the video contents has been increasing a lot these days and promises a huge potential for the network provider, content provider and device manufacturers. However, from the end user's perspective there is no universally accepted existing standard metric, which will ensure the quality of the application/utility to meet the user's desired experience. In order to fulfill this gap, a new metric, called Quality of Experience (QoE), has been proposed in numerous researches recently. Our aim in this paper is to research the evolution of the term QoE, find the influencing factors of QoE metric especially in video streaming and finally QoE modelling and methodologies in practice.", "title": "" }, { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. 
Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "92c91a8e9e5eec86f36d790dec8020e7", "text": "Aspect-based opinion mining, which aims to extract aspects and their corresponding ratings from customers reviews, provides very useful information for customers to make purchase decisions. In the past few years several probabilistic graphical models have been proposed to address this problem, most of them based on Latent Dirichlet Allocation (LDA). While these models have a lot in common, there are some characteristics that distinguish them from each other. These fundamental differences correspond to major decisions that have been made in the design of the LDA models. While research papers typically claim that a new model outperforms the existing ones, there is normally no \"one-size-fits-all\" model. In this paper, we present a set of design guidelines for aspect-based opinion mining by discussing a series of increasingly sophisticated LDA models. We argue that these models represent the essence of the major published methods and allow us to distinguish the impact of various design decisions. We conduct extensive experiments on a very large real life dataset from Epinions.com (500K reviews) and compare the performance of different models in terms of the likelihood of the held-out test set and in terms of the accuracy of aspect identification and rating prediction.", "title": "" }, { "docid": "0bd7c453279c97333e7ac6c52f7127d8", "text": "Among various biometric modalities, signature verification remains one of the most widely used methods to authenticate the identity of an individual. Signature verification, the most important component of behavioral biometrics, has attracted significant research attention over the last three decades. Despite extensive research, the problem still remains open to research due to the variety of challenges it offers. The high intra-class variations in signatures resulting from different physical or mental states of the signer, the differences that appear with aging and the visual similarity in case of skilled forgeries etc. are only a few of the challenges to name. This paper is intended to provide a review of the recent advancements in offline signature verification with a discussion on different types of forgeries, the features that have been investigated for this problem and the classifiers employed. The pros and cons of notable recent contributions to this problem have also been presented along with a discussion of potential future research directions on this subject.", "title": "" }, { "docid": "3867ff9ac24349b17e50ec2a34e84da4", "text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. 
Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.", "title": "" }, { "docid": "e875d4a88e73984e37f5ce9ffe543791", "text": "A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented. The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.", "title": "" }, { "docid": "be69820b8b0f80c9bb9c56d4652645da", "text": "Intel Software Guard Extensions (SGX) is an emerging trusted hardware technology. SGX enables user-level code to allocate regions of trusted memory, called enclaves, where the confidentiality and integrity of code and data are guaranteed. While SGX offers strong security for applications, one limitation of SGX is the lack of system call support inside enclaves, which leads to a non-trivial, refactoring effort when protecting existing applications with SGX. To address this issue, previous works have ported existing library OSes to SGX. However, these library OSes are suboptimal in terms of security and performance since they are designed without taking into account the characteristics of SGX.\n In this paper, we revisit the library OS approach in a new setting---Intel SGX. We first quantitatively evaluate the performance impact of enclave transitions on SGX programs, identifying it as a performance bottleneck for any library OSes that aim to support system-intensive SGX applications. We then present the design and implementation of SGXKernel, an in-enclave library OS, with highlight on its switchless design, which obviates the needs for enclave transitions. This switchless design is achieved by incorporating two novel ideas: asynchronous cross-enclave communication and preemptible in-enclave multi-threading. We intensively evaluate the performance of SGXKernel on microbenchmarks and application benchmarks. The results show that SGXKernel significantly outperforms a state-of-the-art library OS that has been ported to SGX.", "title": "" }, { "docid": "87eb69d6404bf42612806a5e6d67e7bb", "text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. 
This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.", "title": "" }, { "docid": "1e347f69d739577d4bb0cc050d87eb5b", "text": "The rapidly growing paradigm of the Internet of Things (IoT) requires new search engines, which can crawl heterogeneous data sources and search in highly dynamic contexts. Existing search engines cannot meet these requirements as they are designed for traditional Web and human users only. This is contrary to the fact that things are emerging as major producers and consumers of information. Currently, there is very little work on searching IoT and a number of works claim the unavailability of public IoT data. However, it is dismissed that a majority of real-time web-based maps are sharing data that is generated by things, directly. To shed light on this line of research, in this paper, we firstly create a set of tools to capture IoT data from a set of given data sources. We then create two types of interfaces to provide real-time searching services on dynamic IoT data for both human and machine users.", "title": "" }, { "docid": "ec48c81a61954e9c6f262c508b3cdaa7", "text": "Why should policymakers and practitioners care about the scholarly study of international affairs? Those who conduct foreign policy often dismiss academic theorists (frequently, one must admit, with good reason), but there is an inescapable link between the abstract world of theory and the real world of policy. We need theories to make sense of the blizzard of information that bombards us daily. Even policymakers who are contemptuous of \"theory\" must rely on their own (often unstated) ideas about how the world works in order to decide what to do. It is hard to make good policy if one's basic organizing principles are flawed, just as it is hard to construct good theories without knowing a lot about the real world. Everyone uses theories-whether he or she knows it or not-and disagreements about policy usually rest on more fundamental disagreements about the basic forces that shape international outcomes.", "title": "" }, { "docid": "f01233a2f3ad749704649ead44e60cba", "text": "The species of the pseudophyllidean genus Bothriocephalus Rudolphi, 1808 parasitising freshwater fishes in America are revised, based on the examination of type and voucher specimens of seven taxa. There are five valid species: Bothriocephalus claviceps (Goeze, 1782), B. cuspidatus Cooper, 1917, B. formosus Mueller & Van Cleave, 1932, B. acheilognathi Yamaguti, 1934, and B. pearsei Scholz, Vargas-Vázquez & Moravec, 1996. B. texomensis Self, 1954 from Hiodon alosoides in the USA, and B. musculosus Baer, 1937 from a cichlid Cichlasoma biocellatum (= C. octofasciatum) which died in an aquarium in Switzerland, are synonymised with B. cuspidatus. B. schilbeodis Cheng & James, 1960 from Schilbeodes insignis in the USA, B. speciosus (Leidy, 1858) Leidy, 1872 from Boleostoma olmstedi in the USA, and B. cestus Leidy, 1885 from Salvelinus sp. in Canada are considered to be species inquirendae until new material for the evaluation of their taxonomic status is available. B. cordiceps (Leidy, 1872) from Salmo (= Salvelinus) fontinalis in North America is in fact a larva (plerocercoid) of a Diphyllobothrium species. The study showed that there have been many misidentifications, mostly of B. cuspidatus erroneously designated as B. formosus or B. claviceps. 
The five valid species are redescribed and illustrated, with emphasis on scolex morphology. The distribution of individual taxa and the spectrum of their definitive hosts are briefly reviewed and a key facilitating identification of individual species is also provided.", "title": "" }, { "docid": "1c83ce2568af5cc3679b69282b25c35d", "text": "A useful ability for search engines is to be able to rank objects with novelty and diversity: the top k documents retrieved should cover possible intents of a query with some distribution, or should contain a diverse set of subtopics related to the user’s information need, or contain nuggets of information with little redundancy. Evaluation measures have been introduced to measure the effectiveness of systems at this task, but these measures have worst-case NP-hard computation time. The primary consequence of this is that there is no ranking principle akin to the Probability Ranking Principle for document relevance that provides uniform instruction on how to rank documents for novelty and diversity. We use simulation to investigate the practical implications of this for optimization and evaluation of retrieval systems.", "title": "" }, { "docid": "30b508c7b576c88705098ac18657664b", "text": "The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.", "title": "" }, { "docid": "60abc52c4953a01d7964b63dde2d8935", "text": "This article proposes a security authentication process that is well-suited for Vehicular Ad-hoc Networks (VANET). As compared to current Public Key Infrastructure (PKI) proposals for VANET authentication, the scheme is significantly more efficient with regard to bandwidth and computation. The scheme uses time as the creator of asymmetric knowledge. A sender creates a long chain of keys. Each key is used for only a short period of time to sign messages. When a key expires, it is publicly revealed, and then never again used. (The sender subsequently uses the next key in its chain to sign future messages.) Upon receiving a revealed key, recipients authenticate previously received messages. The root of a sender’s keychain is given in a certificate signed by an authority. This article describes several possible certificate exchange methods. It also addresses privacy issues in VANET, specifically the tension between anonymity and the ability to revoke certificates.", "title": "" }, { "docid": "df97dff1e2539f192478f2aa91f69cc4", "text": "Computer systems are increasingly employed in circumstances where their failure (or even their correct operation, if they are built to flawed requirements) can have serious consequences. There is a surprising diversity of opinion concerning the properties that such “critical systems” should possess, and the best methods to develop them. The dependability approach grew out of the tradition of ultra-reliable and fault-tolerant systems, while the safety approach grew out of the tradition of hazard analysis and system safety engineering. 
Yet another tradition is found in the security community, and there are further specialized approaches in the tradition of real-time systems. In this report, I examine the critical properties considered in each approach, and the techniques that have been developed to specify them and to ensure their satisfaction. Since systems are now being constructed that must satisfy several of these critical system properties simultaneously, there is particular interest in the extent to which techniques from one tradition support or conflict with those of another, and in whether certain critical system properties are fundamentally compatible or incompatible with each other. As a step toward improved understanding of these issues, I suggest a taxonomy, based on Perrow’s analysis, that considers the complexity of component interactions and tightness of coupling as primary factors. C. Perrow. Normal Accidents: Living with High Risk Technologies. Basic Books, New York, NY, 1984.", "title": "" }, { "docid": "ddae1c6469769c2c7e683bfbc223ad1a", "text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.", "title": "" } ]
scidocsrr
0f3b88a34e7a921f1d5f261111105a97
From micro to macro: data driven phenotyping by densification of longitudinal electronic medical records
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "01835769f2dc9391051869374e200a6a", "text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.", "title": "" } ]
[ { "docid": "a3cf141ce82d39f8368e4465fc01c0c5", "text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. However, one necessary ingredient for natural interaction is still missing - emotions. This paper describes the problem of bimodal emotion recognition and advocates the use of probabilistic graphical models when fusing the different modalities. We test our audio-visual emotion recognition approach on 38 subjects with 11 HCI-related affect states. The experimental results show that the average person-dependent emotion recognition accuracy is greatly improved when both visual and audio information are used in classification", "title": "" }, { "docid": "77b1507ce0e732b3ac93d83f1a5971b3", "text": "Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technology for high data rate communication system. The basic principle of OFDM i s to divide the available spectrum into parallel channel s in order to transmit data on these channels at a low rate. The O FDM concept is based on the fact that the channels refe rr d to as carriers are orthogonal to each other. Also, the fr equency responses of the parallel channels are overlapping. The aim of this paper is to simulate, using GNU Octave, an OFD M transmission under Additive White Gaussian Noise (AWGN) and/or Rayleigh fading and to analyze the effects o f these phenomena.", "title": "" }, { "docid": "1ef1e20f24fa75b40bcc88a40a544c5b", "text": "Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. Monitoring grid resources is a lively research area given the challenges and manifold applications. The aim of this paper is to advance the understanding of grid monitoring by introducing the involved concepts, requirements, phases, and related standardisation activities, including Global Grid Forum’s Grid Monitoring Architecture. Based on a refinement of the latter, the paper proposes a taxonomy of grid monitoring systems, which is employed to classify a wide range of projects and frameworks. The value of the offered taxonomy lies in that it captures a given system’s scope, scalability, generality and flexibility. The paper concludes with, among others, a discussion of the considered systems, as well as directions for future research. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "24e380a79c5520a4f656ff2177d43dd7", "text": "a r t i c l e i n f o Social media have increasingly become popular platforms for information dissemination. Recently, companies have attempted to take advantage of social advertising to deliver their advertisements to appropriate customers. The success of message propagation in social media depends greatly on the content relevance and the closeness of social relationships. In this paper, considering the factors of user preference, network influence , and propagation capability, we propose a diffusion mechanism to deliver advertising information over microblogging media. Our experimental results show that the proposed model could provide advertisers with suitable targets for diffusing advertisements continuously and thus efficiently enhance advertising effectiveness. In recent years, social media, such as Facebook, Twitter and Plurk, have flourished and raised much attention. 
Social media provide users with an excellent platform to share and receive information and give marketers a great opportunity to diffuse information through numerous populations. An overwhelming majority of marketers are using social media to market their businesses, and a significant 81% of these marketers indicate that their efforts in social media have generated effective exposure for their businesses [59]. With effective vehicles for understanding customer behavior and new hybrid elements of the promotion mix, social media allow enterprises to make timely contact with the end-consumer at relatively low cost and higher levels of efficiency [52]. Since the World Wide Web (Web) is now the primary message delivering medium between advertisers and consumers, it is a critical issue to find the best way to utilize on-line media for advertising purposes [18,29]. The effectiveness of advertisement distribution highly relies on well understanding the preference information of the targeted users. However, some implicit personal information of users, particularly the new users, may not be always obtainable to the marketers [23]. As users know more about their friends than marketers, the relations between the users become a natural medium and filter for message diffusion. Moreover, most people are willing to share their information with friends and are likely to be affected by the opinions of their friends [35,45]. Social advertising is a kind of recommendation system, of sharing information between friends. It takes advantage of the relation of users to conduct an advertising campaign. In 2010, eMarketer reported that 90% of consumers rely on recommendations from people they trust. In the same time, IDG Amplify indicated that the efficiency of social advertising is greater than the traditional …", "title": "" }, { "docid": "8fabb9fe465fe70753fe4f035e4513f1", "text": "Gait energy images (GEIs) and its variants form the basis of many recent appearance-based gait recognition systems. The GEI combines good recognition performance with a simple implementation, though it suffers problems inherent to appearance-based approaches, such as being highly view dependent. In this paper, we extend the concept of the GEI to 3D, to create what we call the gait energy volume, or GEV. A basic GEV implementation is tested on the CMU MoBo database, showing improvements over both the GEI baseline and a fused multi-view GEI approach. We also demonstrate the efficacy of this approach on partial volume reconstructions created from frontal depth images, which can be more practically acquired, for example, in biometric portals implemented with stereo cameras, or other depth acquisition systems. Experiments on frontal depth images are evaluated on an in-house developed database captured using the Microsoft Kinect, and demonstrate the validity of the proposed approach.", "title": "" }, { "docid": "ec189ac55b64402d843721de4fc1f15c", "text": "DroidMiner is a new malicious Android app detection system that uses static analysis to automatically mine malicious program logic from known Android malware. DroidMiner uses a behavioral graph to abstract malware program logic into a sequence of threat modalities, and then applies machine-learning techniques to identify and label elements of the graph that match harvested threat modalities.
Once trained on a mobile malware corpus, DroidMiner can automatically scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) precisely characterize behaviors found within the analyzed app. While DroidMiner is not the first to attempt automated classification of Android applications based on Framework API calls, it is distinguished by its development of modalities that are resistant to noise insertions and its use of associative rule mining that enables automated association of malicious behaviors with modalities. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, DroidMiner achieves a 95.3% detection rate, with a 0.4% false positive rate. We further evaluate DroidMiner’s ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.", "title": "" }, { "docid": "f29b8c75a784a71dfaac5716017ff4f3", "text": "The objective of this paper is to design a multi-agent system architecture for the Scrum methodology. Scrum is an iterative, incremental framework for software development which is flexible, adaptable and highly productive. An agent is a system situated within and a part of an environment that senses the environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future (Franklin and Graesser, 1996). To our knowledge, this is first attempt to include software agents in the Scrum framework. Furthermore, our design covers all the stages of software development. Alternative approaches were only restricted to the analysis and design phases. This Multi-Agent System (MAS) Architecture for Scrum acts as a design blueprint and a baseline architecture that can be realised into a physical implementation by using an appropriate agent development framework. The development of an experimental prototype for the proposed MAS Architecture is in progress. It is expected that this tool will provide support to the development team who will no longer be expected to report, update and manage non-core activities daily.", "title": "" }, { "docid": "84dd3682a7cd1ea88b6d6588e46078ad", "text": "OBJECTIVES\nThe purpose of this exploratory study was to see if meaning in life is associated with mortality in old age.\n\n\nMETHODS\nInterviews were conducted with a nationwide sample of older adults (N = 1,361). Data were collected on meaning in life, mortality, and select control measures.\n\n\nRESULTS\nThree main findings emerged from this study. First, the data suggest that older people with a strong sense of meaning in life are less likely to die over the study follow-up period than those who do not have a strong sense of meaning. Second, the findings indicate that the effect of meaning on mortality can be attributed to the potentially important indirect effect that operates through health. Third, further analysis revealed that one dimension of meaning-having a strong sense of purpose in life--has a stronger relationship with mortality than other facets of meaning. 
The main study findings were observed after the effects of attendance at religious services and emotional support were controlled statistically.\n\n\nDISCUSSION\nIf the results from this study can be replicated, then interventions should be designed to help older people find a greater sense of purpose in life.", "title": "" }, { "docid": "c7d3381b32e6a6bbe3ea9d9b870ce1d2", "text": "Software defect prediction plays an important role in improving software quality and it help to reducing time and cost for software testing. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The ability of a machine to improve its performance based on previous results. Machine learning improves efficiency of human learning, discover new things or structure that is unknown to humans and find important information in a document. For that purpose, different machine learning techniques are used to remove the unnecessary, erroneous data from the dataset. Software defect prediction is seen as a highly important ability when planning a software project and much greater effort is needed to solve this complex problem using a software metrics and defect dataset. Metrics are the relationship between the numerical value and it applied on the software therefore it is used for predicting defect. The primary goal of this survey paper is to understand the existing techniques for predicting software defect.", "title": "" }, { "docid": "38fccd4fd4a18c4c4bc9575092a24a3e", "text": "We investigate the problem of human identity and gender recognition from gait sequences with arbitrary walking directions. Most current approaches make the unrealistic assumption that persons walk along a fixed direction or a pre-defined path. Given a gait sequence collected from arbitrary walking directions, we first obtain human silhouettes by background subtraction and cluster them into several clusters. For each cluster, we compute the cluster-based averaged gait image as features. Then, we propose a sparse reconstruction based metric learning method to learn a distance metric to minimize the intra-class sparse reconstruction errors and maximize the inter-class sparse reconstruction errors simultaneously, so that discriminative information can be exploited for recognition. The experimental results show the efficacy of our approach.", "title": "" }, { "docid": "3f5097b33aab695678caca712b649a8f", "text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. 
I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.", "title": "" }, { "docid": "e50ba614fc997f058f8d495b59c18af5", "text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.", "title": "" }, { "docid": "50f09f5b2e579e878f041f136bafe07e", "text": "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "title": "" }, { "docid": "d6aa7df08694089a6e0e8030be374c20", "text": "Human pluripotent stem cells (hPSCs) offer a unique platform for elucidating the genes and molecular pathways that underlie complex traits and diseases. To realize this promise, methods for rapid and controllable genetic manipulations are urgently needed. By combining two newly developed gene-editing tools, the TALEN and CRISPR/Cas systems, we have developed a genome-engineering platform in hPSCs, which we named iCRISPR. 
iCRISPR enabled rapid and highly efficient generation of biallelic knockout hPSCs for loss-of-function studies, as well as homozygous knockin hPSCs with specific nucleotide alterations for precise modeling of disease conditions. We further demonstrate efficient one-step generation of double- and triple-gene knockout hPSC lines, as well as stage-specific inducible gene knockout during hPSC differentiation. Thus the iCRISPR platform is uniquely suited for dissection of complex genetic interactions and pleiotropic gene functions in human disease studies and has the potential to support high-throughput genetic analysis in hPSCs.", "title": "" }, { "docid": "8372f42c70b3790757f4f1d5535cebc1", "text": "WiFi positioning system has been studying in many fields since the past. Recently, a lot of mobile companies are competing for smartphones. Accordingly, this paper proposes an indoor WiFi positioning system using Android-based smartphones.", "title": "" }, { "docid": "b6b5afb72393e89c211bac283e39d8a3", "text": "In order to promote the use of mushrooms as source of nutrients and nutraceuticals, several experiments were performed in wild and commercial species. The analysis of nutrients included determination of proteins, fats, ash, and carbohydrates, particularly sugars by HPLC-RI. The analysis of nutraceuticals included determination of fatty acids by GC-FID, and other phytochemicals such as tocopherols, by HPLC-fluorescence, and phenolics, flavonoids, carotenoids and ascorbic acid, by spectrophotometer techniques. The antimicrobial properties of the mushrooms were also screened against fungi, Gram positive and Gram negative bacteria. The wild mushroom species proved to be less energetic than the commercial sp., containing higher contents of protein and lower fat concentrations. In general, commercial species seem to have higher concentrations of sugars, while wild sp. contained lower values of MUFA but also higher contents of PUFA. alpha-Tocopherol was detected in higher amounts in the wild species, while gamma-tocopherol was not found in these species. Wild mushrooms revealed a higher content of phenols but a lower content of ascorbic acid, than commercial mushrooms. There were no differences between the antimicrobial properties of wild and commercial species. The ongoing research will lead to a new generation of foods, and will certainly promote their nutritional and medicinal use.", "title": "" }, { "docid": "1b556f4e0c69c81780973a7da8ba2f8e", "text": "We explore ways of allowing for the offloading of computationally rigorous tasks from devices with slow logical processors onto a network of anonymous peer-processors. Recent advances in secret sharing schemes, decentralized consensus mechanisms, and multiparty computation (MPC) protocols are combined to create a P2P MPC market. Unlike other computational ”clouds”, ours is able to generically compute any arithmetic circuit, providing a viable platform for processing on the semantic web. Finally, we show that such a system works in a hostile environment, that it scales well, and that it adapts very easily to any future advances in the complexity theoretic cryptography used. Specifically, we show that the feasibility of our system can only improve, and is historically guaranteed to do so.", "title": "" }, { "docid": "5bde20f5c0cad9bf14bec276b59c9054", "text": "Energy conversion of sunlight by photosynthetic organisms has changed Earth and life on it. 
Photosynthesis arose early in Earth's history, and the earliest forms of photosynthetic life were almost certainly anoxygenic (non-oxygen evolving). The invention of oxygenic photosynthesis and the subsequent rise of atmospheric oxygen approximately 2.4 billion years ago revolutionized the energetic and enzymatic fundamentals of life. The repercussions of this revolution are manifested in novel biosynthetic pathways of photosynthetic cofactors and the modification of electron carriers, pigments, and existing and alternative modes of photosynthetic carbon fixation. The evolutionary history of photosynthetic organisms is further complicated by lateral gene transfer that involved photosynthetic components as well as by endosymbiotic events. An expanding wealth of genetic information, together with biochemical, biophysical, and physiological data, reveals a mosaic of photosynthetic features. In combination, these data provide an increasingly robust framework to formulate and evaluate hypotheses concerning the origin and evolution of photosynthesis.", "title": "" }, { "docid": "05fa9ab12a14f5624ab532c9c034bbb8", "text": "This paper presents the design, implementation, characterization and recording results of a wireless, batteryless microsystem for neural recording on rat, with implantable grid electrode and 3-dimensional probe array. The former provides brain surface ECoG acquisition, while the latter achieves 3D extracellular recording in the 3D target volume of tissue. The microsystem addressed the aforementioned properties by combining MEMS neural sensors, low-power circuit designs and commercial chips into system-level integration.", "title": "" }, { "docid": "e1bb6bcd75b14e970c461ef0b55dc9fe", "text": "The aim of this study was to assess and compare the body image of breast cancer patients (n = 70) whom underwent breast conserving surgery or mastectomy, as well as to compare patients’ scores with that of a sample of healthy control women (n = 70). A secondary objective of this study was to examine the reliability and validity of the 10-item Greek version of the Body Image Scale, a multidimensional measure of body image changes and concerns. Exploratory and confirmatory factor analyses on the items of this scale resulted in a two factor solution, indicating perceived attractiveness, and body and appearance satisfaction. Comparison of the two surgical groups revealed that women treated with mastectomy felt less attractive and more self-conscious, did not like their overall appearance, were dissatisfied with their scar, and avoided contact with people. Hierarchical regression analysis showed that more general body image concerns were associated with belonging to the mastectomy group, compared to the cancer-free group of women. Implications for clinical practice and recommendations for future investigations are discussed.", "title": "" } ]
scidocsrr
264a873b8e345efaf1a04b01c877b957
Video Normals from Colored Lights
[ { "docid": "df2b4b46461d479ccf3d24d2958f81fd", "text": "This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our optimization-based method builds on the observation that most objects are composed of a small number of fundamental materials by constraining each pixel to be representable by a combination of at most two such materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding accurate rerenderings under novel lighting conditions for a wide variety of objects. We demonstrate examples of interactive editing operations made possible by our approach.", "title": "" } ]
[ { "docid": "1e8466199d3ac46c0005551204d017bf", "text": "Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, whereas not having demonstrated strong generalization ability on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, and demonstrate its superior performance on various large-scale benchmarks, and in particular show its great success on challenging reconstruction tasks. Moreover, we provide guidelines towards practical integration of learned descriptors in Structurefrom-Motion (SfM) pipelines, showing the good trade-off that GeoDesc delivers to 3D reconstruction tasks between accuracy and efficiency.", "title": "" }, { "docid": "dfb9c31c73f1ca5849f6f78c80d9fd55", "text": "Handing over objects to humans is an essential capability for assistive robots. While there are infinite ways to hand an object, robots should be able to choose the one that is best for the human. In this paper we focus on choosing the robot and object configuration at which the transfer of the object occurs, i.e. the hand-over configuration. We advocate the incorporation of user preferences in choosing hand-over configurations. We present a user study in which we collect data on human preferences and a human-robot interaction experiment in which we compare hand-over configurations learned from human examples against configurations planned using a kinematic model of the human. We find that the learned configurations are preferred in terms of several criteria, however planned configurations provide better reachability. Additionally, we find that humans prefer hand-overs with default orientations of objects and we identify several latent variables about the robot's arm that capture significant human preferences. These findings point towards planners that can generate not only optimal but also preferable hand-over configurations for novel objects.", "title": "" }, { "docid": "722c18701e7c8b9a054a9603eb6bf8f4", "text": "We report in this case-study paper our experience and success story with a practical approach and tool for unit regression testing of a SCADA (Supervisory Control and Data Acquisition) software. The tool uses a black-box specification of the units under test to automatically generate NUnit test code. We then improved the test suite by white-box and mutation testing. The approach and tool were developed in an action-research project to test a commercial large-scale SCADA system called Rocket.", "title": "" }, { "docid": "1fcdfd02a6ecb12dec5799d6580c67d4", "text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. 
Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.", "title": "" }, { "docid": "0aa85d4ac0f2034351d5ba690929db19", "text": "The quantity of small scale solar photovoltaic (PV) arrays in the United States has grown rapidly in recent years. As a result, there is substantial interest in high quality information about the quantity, power capacity, and energy generated by such arrays, including at a high spatial resolution (e.g., cities, counties, or other small regions). Unfortunately, existing methods for obtaining this information, such as surveys and utility interconnection filings, are limited in their completeness and spatial resolution. This work presents a computer algorithm that automatically detects PV panels using very high resolution color satellite imagery. The approach potentially offers a fast, scalable method for obtaining accurate information on PV array location and size, and at much higher spatial resolutions than are currently available. The method is validated using a very large (135 km) collection of publicly available (Bradbury et al., 2016) aerial imagery, with over 2700 human annotated PV array locations. The results demonstrate the algorithm is highly effective on a per-pixel basis. It is likewise effective at object-level PV array detection, but with significant potential for improvement in estimating the precise shape/size of the PV arrays. These results are the first of their kind for the detection of solar PV in aerial imagery, demonstrating the feasibility of the approach and establishing a baseline performance for future investigations. 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f629f426943b995a304f3d35b7090cda", "text": "We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label] where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than languagespecific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-ofthe-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning “from scratch” in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.", "title": "" }, { "docid": "3bebd1c272b1cba24f6aeeabaa5c54d2", "text": "Cloacal anomalies occur when failure of the urogenital septum to separate the cloacal membrane results in the urethra, vagina, rectum and anus opening into a single common channel. The reported incidence is 1:50,000 live births. 
Short-term paediatric outcomes of surgery are well reported and survival into adulthood is now usual, but long-term outcome data are less comprehensive. Chronic renal failure is reported to occur in 50 % of patients with cloacal anomalies, and 26–72 % (dependant on the length of the common channel) of patients experience urinary incontinence in adult life. Defaecation is normal in 53 % of patients, with some managed by methods other than surgery, including medication, washouts, stoma and antegrade continent enema. Gynaecological anomalies are common and can necessitate reconstructive surgery at adolescence for menstrual obstruction. No data are currently available on sexual function and little on the quality of life. Pregnancy is extremely rare and highly risky. Patient care should be provided by a multidisciplinary team with experience in managing these and other related complex congenital malformations. However, there is an urgent need for a well-planned, collaborative multicentre prospective study on the urological, gastrointestinal and gynaecological aspects of this rare group of complex conditions.", "title": "" }, { "docid": "b47b06f8548716e0ef01a0e113d48e5d", "text": "This paper proposes a framework to automatically construct taxonomies from a corpus of text documents. This framework first extracts terms from documents using a part-of-speech parser. These terms are then filtered using domain pertinence, domain consensus, lexical cohesion, and structural relevance. The remaining terms represent concepts in the taxonomy. These concepts are arranged in a hierarchy with either the extended subsumption method that accounts for concept ancestors in determining the parent of a concept or a hierarchical clustering algorithm that uses various text-based window and document scopes for concept co-occurrences. Our evaluation in the field of management and economics indicates that a trade-off between taxonomy quality and depth must be made when choosing one of these methods. The subsumption method is preferable for shallow taxonomies, whereas the hierarchical clustering algorithm is recommended for deep taxonomies.", "title": "" }, { "docid": "2bd9c5e042feb6a8a6ab0b1c6e97b06f", "text": "Stacie and Meg—juniors at Atlas High School—soon must submit their course requests for next year. They have completed 3 years of science as mandated by the school system and must decide whether to take additional courses. Physics is an option, and although it is not required they believe that taking it may help with college admission. To date they have received similar grades (As and Bs) in science courses. The night before the class sign-up date they discuss the situation with their parents. Meg's dad feels that she should take physics since it will help her understand how the world works. Meg notes that Ms. Blakely (the physics teacher) is not very good. After further discussion, however, Meg concludes that she feels confident about learning physics because she always has been able to learn science in the past and that if she does not understand something she will ask the teacher. So Meg decides to sign up for it. Stacie, on the other hand, tells her parents that she just does not feel smart enough to learn or do well in physics and that because Ms. Blakely is not a good teacher Stacie would not receive much help from her. Stacie also tells her parents that few girls take the course. 
Under no pressure from her parents, Stacie decides she will not sign up for physics.", "title": "" }, { "docid": "a45109840baf74c61b5b6b8f34ac81d5", "text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.", "title": "" }, { "docid": "464f7d25cb2a845293a3eb8c427f872f", "text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.", "title": "" }, { "docid": "79b3ed4c5e733c73b5e7ebfdf6069293", "text": "This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. 
In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.", "title": "" }, { "docid": "4abd7884b97c1af7c24a81da7a6c0c3d", "text": "AIM\nThe interaction between running, stretching and practice jumps during warm-up for jumping tests has not been investigated. The purpose of the present study was to compare the effects of running, static stretching of the leg extensors and practice jumps on explosive force production and jumping performance.\n\n\nMETHODS\nSixteen volunteers (13 male and 3 female) participated in five different warm-ups in a randomised order prior to the performance of two jumping tests. The warm-ups were control, 4 min run, static stretch, run + stretch, and run + stretch + practice jumps. After a 2 min rest, a concentric jump and a drop jump were performed, which yielded 6 variables expressing fast force production and jumping performance of the leg extensor muscles (concentric jump height, peak force, rate of force developed, drop jump height, contact time and height/time).\n\n\nRESULTS\nGenerally the stretching warm-up produced the lowest values and the run or run + stretch + jumps warm-ups produced the highest values of explosive force production. There were no significant differences (p<0.05) between the control and run + stretch warm-ups, whereas the run yielded significantly better scores than the run + stretch warm-up for drop jump height (3.2%), concentric jump height (3.4%) and peak concentric force (2.7%) and rate of force developed (15.4%).\n\n\nCONCLUSION\nThe results indicated that submaximum running and practice jumps had a positive effect whereas static stretching had a negative influence on explosive force and jumping performance. It was suggested that an alternative for static stretching should be considered in warm-ups prior to power activities.", "title": "" }, { "docid": "bf0cfb73aad56e56773e0788d6111208", "text": "Successful open source communities are constantly looking for new members and helping them become active developers. A common approach for developer onboarding in open source projects is to let newcomers focus on relevant yet easy-to-solve issues to familiarize themselves with the code and the community. The goal of this research is twofold. First, we aim at automatically identifying issues that newcomers can resolve by analyzing the history of resolved issues by simply using the title and description of issues. Second, we aim at automatically identifying issues, that can be resolved by newcomers who later become active developers. 
We mined the issue trackers of three large open source projects and extracted natural language features from the title and description of resolved issues. In a series of experiments, we optimized and compared the accuracy of four supervised classifiers to address our research goals. Random Forest, achieved up to 91% precision (F1-score 72%) towards the first goal while for the second goal, Decision Tree achieved a precision of 92% (F1-score 91%). A qualitative evaluation gave insights on what information in the issue description is helpful for newcomers. Our approach can be used to automatically identify, label, and recommend issues for newcomers in open source software projects based only on the text of the issues.", "title": "" }, { "docid": "d799257d4a78401bf25e492250b64da8", "text": "We examined anticipatory mechanisms of reward-motivated memory formation using event-related FMRI. In a monetary incentive encoding task, cues signaled high- or low-value reward for memorizing an upcoming scene. When tested 24 hr postscan, subjects were significantly more likely to remember scenes that followed cues for high-value rather than low-value reward. A monetary incentive delay task independently localized regions responsive to reward anticipation. In the encoding task, high-reward cues preceding remembered but not forgotten scenes activated the ventral tegmental area, nucleus accumbens, and hippocampus. Across subjects, greater activation in these regions predicted superior memory performance. Within subject, increased correlation between the hippocampus and ventral tegmental area was associated with enhanced long-term memory for the subsequent scene. These findings demonstrate that brain activation preceding stimulus encoding can predict declarative memory formation. The findings are consistent with the hypothesis that reward motivation promotes memory formation via dopamine release in the hippocampus prior to learning.", "title": "" }, { "docid": "29ce7251e5237b0666cef2aee7167126", "text": "Chinese characters have a huge set of character categories, more than 20, 000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes, namely zero-shot learning.", "title": "" }, { "docid": "c760e6db820733dc3f57306eef81e5c9", "text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. 
This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.", "title": "" }, { "docid": "019c2d5927e54ae8ce3fc7c5b8cff091", "text": "In this paper, we present Affivir, a video browsing system that recommends Internet videos that match a user’s affective preference. Affivir models a user’s watching behavior as sessions, and dynamically adjusts session parameters to cater to the user’s current mood. In each session, Affivir discovers a user’s affective preference through user interactions, such as watching or skipping videos. Affivir uses video affective features (motion, shot change rate, sound energy, and audio pitch average) to retrieve videos that have similar affective responses. To efficiently search videos of interest from our video repository, all videos in the repository are pre-processed and clustered. Our experimental results shows that Affivir has made a significant improvement in user satisfaction and enjoyment, compared with several other popular baseline approaches.", "title": "" }, { "docid": "1d2f35fb17183a215e864693712fa75b", "text": "Improving the coding efficiency is the eternal theme in video coding field. The traditional way for this purpose is to reduce the redundancies inside videos by adding numerous coding options at the encoder side. However, no matter what we have done, it is still hard to guarantee the optimal coding efficiency. On the other hand, the decoded video can be treated as a certain compressive sampling of the original video. According to the compressive sensing theory, it might be possible to further enhance the quality of the decoded video by some restoration methods. Different from the traditional methods, without changing the encoding algorithm, this paper focuses on an approach to improve the video's quality at the decoder end, which equals to further boosting the coding efficiency. Furthermore, we propose a very deep convolutional neural network to automatically remove the artifacts and enhance the details of HEVC-compressed videos, by utilizing that underused information left in the bit-streams and external images. Benefit from the prowess and efficiency of the fully end-to-end feed forward architecture, our approach can be treated as a better decoder to efficiently obtain the decoded frames with higher quality. Extensive experiments indicate our approach can further improve the coding efficiency post the deblocking and SAO in current HEVC decoder, averagely 5.0%, 6.4%, 5.3%, 5.5% BD-rate reduction for all intra, lowdelay P, lowdelay B and random access configurations respectively. This method can aslo be extended to any video coding standards.", "title": "" }, { "docid": "7adf46bb0a4ba677e58aee9968d06293", "text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. 
This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.", "title": "" } ]
scidocsrr
10ba715cd3db4f9f338f391d6a0401d7
Challenges in 802.11 encryption algorithms: The need for an adaptive scheme for improved performance
[ { "docid": "2ffd4537f9adff88434c8a2b5860b6a5", "text": "free download the design of rijndael: aes the advanced the design of rijndael aes the advanced encryption publication moved: fips 197, advanced encryption standard rijndael aes paper nist computer security resource the design of rijndael toc beck-shop design and implementation of advanced encryption standard lecture note 4 the advanced encryption standard (aes) selecting the advanced encryption standard implementation of advanced encryption standard (aes implementation of advanced encryption standard algorithm cryptographic algorithms aes cryptography the advanced encryption the successor of des computational and algebraic aspects of the advanced advanced encryption standard security forum 2017 advanced encryption standard 123seminarsonly design of high speed 128 bit aes algorithm for data encryption fpga based implementation of aes encryption and decryption effective comparison and evaluation of des and rijndael advanced encryption standard (aes) and it’s working the long road to the advanced encryption standard fpga implementations of advanced encryption standard a survey a reconfigurable cryptography coprocessor rcc for advanced vlsi design and implementation of pipelined advanced information security and cryptography springer cryptographic algorithms (aes, rsa) polynomials in the nation’s service: using algebra to chapter 19: rijndael: a successor to the data encryption a vlsi architecture for rijndael, the advanced encryption a study of encryption algorithms (rsa, des, 3des and aes design an aes algorithm using s.r & m.c technique alook at the advanced encr yption standard (aes) aes-512: 512-bit advanced encryption standard algorithm some algebraic aspects of the advanced encryption standard global information assurance certification paper design of parallel advanced encryption standard (ae s shared architecture for encryption/decryption of aes iceec2015sp06.pdf an enhanced advanced encryption standard a vhdl implementation of the advanced encryption standard advanced encryption standard ijcset vlsi implementation of enhanced aes cryptography", "title": "" } ]
[ { "docid": "a33ed384b8f4a86e8cc82970c7074bad", "text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.", "title": "" }, { "docid": "e3db113a2b09ee8c7c093e696c85e6bf", "text": "Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.", "title": "" }, { "docid": "68c31aa73ba8bcc1b3421981877d4310", "text": "Several approaches are available to create cross-platform applications. The majority of these approaches focus on purely mobile platforms. Their principle is to develop the application once and be able to deploy it to multiple mobile platforms with different operating systems (Android (Java), IOS (Objective C), Windows Phone 7 (C#), etc.). In this article, we propose a merged approach and cross-platform called ZCA \"ZeroCouplage Approach\". Merged to regroup the strong points of approaches: \"Runtime\", \"Component-Based\" and \"Cloud-Based\" thank to a design pattern which we created and named M2VC (Model-Virtual-View-Controller). 
Cross-platform allows creating a unique application that is deployable directly on many platforms: Web, Mobile and Desktop. In this article, we also compare our ZCA approach with others to prove its added value. Our idea, contrary to mobile approaches, consists of a given technology to implement cross-platform applications. To validate our approach, we have developed an open source framework named ZCF \"ZeroCouplage Framework\" for Java technology.", "title": "" }, { "docid": "24e380a79c5520a4f656ff2177d43dd7", "text": "Social media have increasingly become popular platforms for information dissemination. Recently, companies have attempted to take advantage of social advertising to deliver their advertisements to appropriate customers. The success of message propagation in social media depends greatly on the content relevance and the closeness of social relationships. In this paper, considering the factors of user preference, network influence, and propagation capability, we propose a diffusion mechanism to deliver advertising information over microblogging media. Our experimental results show that the proposed model could provide advertisers with suitable targets for diffusing advertisements continuously and thus efficiently enhance advertising effectiveness. In recent years, social media, such as Facebook, Twitter and Plurk, have flourished and raised much attention. Social media provide users with an excellent platform to share and receive information and give marketers a great opportunity to diffuse information through numerous populations. An overwhelming majority of marketers are using social media to market their businesses, and a significant 81% of these marketers indicate that their efforts in social media have generated effective exposure for their businesses [59]. With effective vehicles for understanding customer behavior and new hybrid elements of the promotion mix, social media allow enterprises to make timely contact with the end-consumer at relatively low cost and higher levels of efficiency [52]. Since the World Wide Web (Web) is now the primary message delivering medium between advertisers and consumers, it is a critical issue to find the best way to utilize on-line media for advertising purposes [18,29]. The effectiveness of advertisement distribution highly relies on well understanding the preference information of the targeted users. However, some implicit personal information of users, particularly the new users, may not be always obtainable to the marketers [23]. As users know more about their friends than marketers, the relations between the users become a natural medium and filter for message diffusion. Moreover, most people are willing to share their information with friends and are likely to be affected by the opinions of their friends [35,45]. Social advertising is a kind of recommendation system, of sharing information between friends. It takes advantage of the relation of users to conduct an advertising campaign. In 2010, eMarketer reported that 90% of consumers rely on recommendations from people they trust. At the same time, IDG Amplify indicated that the efficiency of social advertising is greater than the traditional …", "title": "" }, { "docid": "661e9f25abc38bd60f408cefeeb881e1", "text": "The sirtuins are a highly conserved family of NAD+-dependent enzymes that regulate lifespan in lower organisms.
Recently, the mammalian sirtuins have been connected to an ever widening circle of activities that encompass cellular stress resistance, genomic stability, tumorigenesis and energy metabolism. Here we review the recent progress in sirtuin biology, the role these proteins have in various age-related diseases and the tantalizing notion that the activity of this family of enzymes somehow regulates how long we live.", "title": "" }, { "docid": "f4cb0eb6d39c57779cf9aa7b13abef14", "text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.", "title": "" }, { "docid": "435200b067ebd77f69a04cc490d73fa6", "text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.", "title": "" }, { "docid": "aee62b585bb8a51b7bd9e0835bce72b4", "text": "Someone said, “It is a bad craftsman that blames his tools.” It should be obvious to the thoughtful observer that the problem may be the implementation of ISD, not a systematic approach itself. At the highest level of a systems approach one cannot imagine a design process that does not identify the training needs of an organization or the learning needs of the students. 
While learning occurs in many different environments, it is generally agreed that instruction requires that one first identify the goals of the instruction. It is equally difficult to imagine a process that does not involve planning, development, implementation, and evaluation. It is not these essential development activities that are in question but perhaps the fact that their detailed implementation in various incarnations of ISD do not represent the most efficient or effective method for designing instruction. A more significant element is the emphasis on the process involved in developing instruction rather than the basic learning principles that this process should emphasize. Merely following a series of steps, when there is insufficient guidance as to quality, is likely to result in an inferior product. A technology involves not only the steps involved but a set of specifications for what each step is to accomplish. Perhaps many ISD implementations have had insufficient specifications for the products of the process.", "title": "" }, { "docid": "4284e9bbe3bf4c50f9e37455f1118e6b", "text": "A longevity revolution (Butler, 2008) is occurring across the globe. Because of factors ranging from the reduction of early-age mortality to an increase in life expectancy at later ages, most of the world’s population is now living longer than preceding generations (Bengtson, 2014). There are currently more than 44 million older adults—typically defined as persons 65 years and older—living in the United States, and this number is expected to increase to 98 million by 2060 (Administration on Aging, 2016). Although most older adults report higher levels of life satisfaction than do younger or middle-aged adults (George, 2010), between 5.6 and 8 million older Americans have a diagnosable mental health or substance use disorder (Bartels & Naslund, 2013). Furthermore, because of the rapid growth of the older adult population, this figure is expected to nearly double by 2030 (Bartels & Naslund, 2013). Mental health care is effective for older adults, and evidence-based treatments exist to address a broad range of issues, including anxiety disorders, depression, sleep disturbances, substance abuse, and some symptoms of dementia (Myers & Harper, 2004). Counseling interventions may also be beneficial for nonclinical life transitions, such as coping with loss, adjusting to retirement and a reduced income, and becoming a grandparent (Myers & Harper, 2004). Yet, older adults are underserved when it comes to mental", "title": "" }, { "docid": "c86b01c42f54053acf69c7ea3495c330", "text": "Opioids are central analgesics that act on the CNS (central nervous system) and PNS (peripheral nervous system). We investigated the effects of codeine (COD) and tramadol (TRAM) on local anesthesia of the sciatic nerve. Eighty Wistar male rats received the following SC injections in the popliteal fossa: local anesthetic with epinephrine (LA); local anesthetic without vasoconstrictor (LA WV); COD; TRAM; LA + COD; LA + TRAM; COD 20 minutes prior to LA (COD 20' + LA) or TRAM 20 minutes prior to LA (TRAM 20' + LA). As a nociceptive function, the blockade was considered the absence of a paw withdraw reflex. As a motor function, it was the absence of claudication. As a proprioceptive function, it was the absence of hopping and tactile responses. All data were compared using repeated-measures analysis of variance (ANOVA). 
Opioids showed a significant increase in the level of anesthesia, and the blockade duration of LA + COD was greater than that of the remaining groups (p < 0.05). The associated use of opioids improved anesthesia efficacy. This could lead to a new perspective in controlling dental pain.", "title": "" }, { "docid": "b90563b5f6d2b606d335222eb06d0b9a", "text": "Ensuring differential privacy of models learned from sensitive user data is an important goal that has been studied extensively in recent years. It is now known that for some basic learning problems, especially those involving high-dimensional data, producing an accurate private model requires much more data than learning without privacy. At the same time, in many applications it is not necessary to expose the model itself. Instead users may be allowed to query the prediction model on their inputs only through an appropriate interface. Here we formulate the problem of ensuring privacy of individual predictions and investigate the overheads required to achieve it in several standard models of classification and regression. We first describe a simple baseline approach based on training several models on disjoint subsets of data and using standard private aggregation techniques to predict. We show that this approach has nearly optimal sample complexity for (realizable) PAC learning of any class of Boolean functions. At the same time, without strong assumptions on the data distribution, the aggregation step introduces a substantial overhead. We demonstrate that this overhead can be avoided for the well-studied class of thresholds on a line and for a number of standard settings of convex regression. The analysis of our algorithm for learning thresholds relies crucially on strong generalization guarantees that we establish for all differentially private prediction algorithms.", "title": "" }, { "docid": "f2aff84f10b59cbc127dab6266cee11c", "text": "This paper extends the Argument Interchange Format to enable it to represent dialogic argumentation. One of the challenges is to tie together the rules expressed in dialogue protocols with the inferential relations between premises and conclusions. The extensions are founded upon two important analogies which minimise the extra ontological machinery required. First, locutions in a dialogue are analogous to AIF I-nodes which capture propositional data. Second, steps between locutions are analogous to AIF S-nodes which capture inferential movement. This paper shows how these two analogies combine to allow both dialogue protocols and dialogue histories to be represented alongside monologic arguments in a single coherent system.", "title": "" }, { "docid": "37f5fcde86e30359e678ff3f957e3c7e", "text": "A Phase I dose-proportionality study is an essential tool to understand drug pharmacokinetic dose-response relationship in early clinical development. There are a number of different approaches to the assessment of dose proportionality. The confidence interval (CI) criteria approach, a statistically sound and clinically relevant approach, has been proposed to detect dose-proportionality (Smith, et al. 2000), by which the proportionality is declared if the 90% CI for slope is completely contained within the pre-determined critical interval. This method, enhancing the information from a clinical dose-proportionality study, has gradually drawn attention.
However, exact power calculation for dose-proportionality studies based on CI criteria poses difficulty for practitioners since the methodology was essentially derived from the two one-sided tests (TOST) procedure for the slope, which should be unity under proportionality. It requires sophisticated numerical integration, and it is not available in statistical software packages. This paper presents a SAS Macro to compute the empirical power for the CI-based dose-proportionality studies. The resulting sample sizes and corresponding empirical powers suggest that this approach is powerful in detecting dose-proportionality under commonly used sample sizes for phase I studies.", "title": "" }, { "docid": "1d5e363647bd8018b14abfcc426246bb", "text": "This paper presents a new approach to improve the performance of finger-vein identification systems presented in the literature. The proposed system simultaneously acquires the finger-vein and low-resolution fingerprint images and combines these two evidences using a novel score-level combination strategy. We examine the previously proposed finger-vein identification approaches and develop a new approach that illustrates its superiority over prior published efforts. The utility of low-resolution fingerprint images acquired from a webcam is examined to ascertain the matching performance from such images. We develop and investigate two new score-level combinations, i.e., holistic and nonlinear fusion, and comparatively evaluate them with more popular score-level fusion approaches to ascertain their effectiveness in the proposed system. The rigorous experimental results presented on the database of 6264 images from 156 subjects illustrate significant improvement in the performance, i.e., both from the authentication and recognition experiments.", "title": "" }, { "docid": "0fd635cfbcbd2d648f5c25ce2cb551a5", "text": "The main focus of relational learning for knowledge graph completion (KGC) lies in exploiting rich contextual information for facts. Many state-of-the-art models incorporate fact sequences, entity types, and even textual information. Unfortunately, most of them do not fully take advantage of rich structural information in a KG, i.e., connectivity patterns around each entity. In this paper, we propose a context-aware convolutional learning (CACL) model which jointly learns from entities and their multi-hop neighborhoods. Since we directly utilize the connectivity patterns contained in each multi-hop neighborhood, the structural role similarity among entities can be better captured, resulting in more informative entity and relation embeddings. Specifically, CACL collects entities and relations from the multi-hop neighborhood as contextual information according to their relative importance and uniquely maps them to a linear vector space. Our convolutional architecture leverages a deep learning technique to represent each entity along with its linearly mapped contextual information. Thus, we can elaborately extract the features of key connectivity patterns from the context and incorporate them into a score function which evaluates the validity of facts.
Experimental results on the newest datasets show that CACL outperforms existing approaches by successfully enriching embeddings with neighborhood information.", "title": "" }, { "docid": "76afcc3dfbb06f2796b61c8b5b424ad8", "text": "Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them, and otherwise offering competitive results.", "title": "" }, { "docid": "242b854de904075d04e7044e680dc281", "text": "Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh and the end of eight grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.", "title": "" }, { "docid": "0f613e9c6d2a6ca47d5ed0e6b853735e", "text": "We introduce a novel approach for automatically classifying the sentiment of Twitter messages. These messages are classified as either positive or negative with respect to a query term. This is useful for consumers who want to research the sentiment of products before purchase, or companies that want to monitor the public sentiment of their brands. There is no previous research on classifying sentiment of messages on microblogging services like Twitter. We present the results of machine learning algorithms for classifying the sentiment of Twitter messages using distant supervision. Our training data consists of Twitter messages with emoticons, which are used as noisy labels. This type of training data is abundantly available and can be obtained through automated means. We show that machine learning algorithms (Naive Bayes, Maximum Entropy, and SVM) have accuracy above 80% when trained with emoticon data. This paper also describes the preprocessing steps needed in order to achieve high accuracy. 
The main contribution of this paper is the idea of using tweets with emoticons for distant supervised learning.", "title": "" }, { "docid": "2d2465aff21421330f82468858a74cf4", "text": "There has been a tremendous increase in popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing the data with user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that majority of the fitness trackers use unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of user's activity, making it possible for an eavesdropper to determine user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can represent user's gait which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.", "title": "" }, { "docid": "2cacc319693079eb420c51f602dc45ec", "text": "We provide code that produces beautiful poetry. Our sonnet-generation algorithm includes several novel elements that improve over the state-of-the-art, leading to rhythmic and inspiring poems. The work discussed here is the winner of the 2018 PoetiX Literary Turing Test Award for computer-generated poetry.", "title": "" } ]
scidocsrr
0db71867f1cbc8734dadd5d541cf4317
Enhancing Differential Evolution Utilizing Eigenvector-Based Crossover Operator
[ { "docid": "3293e4e0d7dd2e29505db0af6fbb13d1", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.", "title": "" } ]
[ { "docid": "43228a3436f23d786ad7faa7776f1e1b", "text": "Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) include Wegener granulomatosis, microscopic polyangiitis, Churg–Strauss syndrome and renal-limited vasculitis. This Review highlights the progress that has been made in our understanding of AAV pathogenesis and discusses new developments in the treatment of these diseases. Evidence from clinical studies, and both in vitro and in vivo experiments, supports a pathogenic role for ANCAs in the development of AAV; evidence is stronger for myeloperoxidase-ANCAs than for proteinase-3-ANCAs. Neutrophils, complement and effector T cells are also involved in AAV pathogenesis. With respect to treatment of AAV, glucocorticoids, cyclophosphamide and other conventional therapies are commonly used to induce remission in generalized disease. Pulse intravenous cyclophosphamide is equivalent in efficacy to oral cyclophosphamide but seems to be associated with less adverse effects. Nevertheless, alternatives to cyclophosphamide therapy have been investigated, such as the use of methotrexate as a less-toxic alternative to cyclophosphamide to induce remission in non-organ-threatening or non-life-threatening AAV. Furthermore, rituximab is equally as effective as cyclophosphamide for induction of remission in AAV and might become the standard of therapy in the near future. Controlled trials in which specific immune effector cells and molecules are being therapeutically targeted have been initiated or are currently being planned.", "title": "" }, { "docid": "b85ca1bbd3d5224b0e10b2cda433fe8f", "text": "We show that the Graph Isomorphism (GI) problem and its generalizations, the String Isomorphism (SI) and Coset Intersection (CI) problems, can be solved in quasipolynomial (exp ( (log n) ) ) time. The best previous bound for GI was exp(O( √ n log n)), where n is the number of vertices (Luks, 1983); for SI and CI, the bound was similar, exp(Õ( √ n)), where n is the size of the permutation domain (Babai, 1983). The SI problem takes as input two strings, x and y, of length n, and a permutation group G of degree n and asks if some element of G transforms x into y. Our algorithm builds on Luks’s SI framework and attacks its bottleneck, characterized by an epimorphism φ of G onto the alternating group acting on a set Γ of size k > c log n. Our goal is to break this symmetry. The crucial first step is to find a canonical t-ary relational structure on Γ, with not too much symmetry, for some t = O(log n). We say that an element x in the domain of G is affected by φ if φ maps the stabilizer of x to a proper subgroup of Ak. The affected/unaffected dichotomy provides a device to construct global symmetry from local information through the core group-theoretic “local certificates” routine. This algorithm in turn produces the required t-ary structure and thereby sets the stage for symmetry breaking via combinatorial methods of canonical partitioning. The latter lead to the emergence of the Johnson graphs as the sole obstructions to effective canonical partitioning. For a list of updates compared to the first two arXiv versions, see the Acknowledgments (Sec. 18.1). WARNING. While the present version fills significant gaps of the previous versions and improves the presentation of some components of the paper, the revision is incomplete; at the current stage, it includes notational, conceptual, and organizational inconsistencies. 
A fuller explanation of this disclaimer appears in the Acknowledgments (Sec. 18.1) at the end of the paper. ∗ Research supported in part by NSF Grants CCF-7443327 (2014-current), CCF-1017781 (2010-2014), and CCF-0830370 (2008–2010). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author and do not necessarily reflect the views of the National Science Foundation (NSF).", "title": "" }, { "docid": "26162f0e3f6c8752a5dbf7174d2e5e44", "text": "Literature on the combination of qualitative and quantitative research components at the primary empirical study level has recently accumulated exponentially. However, this combination is only rarely discussed and applied at the research synthesis level. The purpose of this paper is to explore the possible contribution of mixed methods research to the integration of qualitative and quantitative research at the synthesis level. In order to contribute to the methodology and utilization of mixed methods at the synthesis level, we present a framework to perform mixed methods research syntheses (MMRS). The presented classification framework can help to inform researchers intending to carry out MMRS, and to provide ideas for conceptualizing and developing those syntheses. We illustrate the use of this framework by applying it to the planning of MMRS on effectiveness studies concerning interventions for challenging behavior in persons with intellectual disabilities, presenting two hypothetical examples. Finally, we discuss possible strengths of MMRS and note some remaining challenges concerning the implementation of these syntheses.", "title": "" }, { "docid": "4c2ab8f148d2e3136d4976b1b88184d5", "text": "In ten years, more than half the world’s population will be living in cities. The United Nations (UN) has stated that this will threaten cities with social conflict, environmental degradation and the collapse of basic services. The economic, social, and environmental planning practices of societies embodying ‘urban sustainability’ have been proposed as antidotes to these negative urban trends. ‘Urban sustainability’ is a doctrine with diverse origins. The author believes that the alternative models of cultural development in Curitiba, Brazil, Kerala, India, and Nayarit, Mexico embody the integration and interlinkage of economic, social, and environmental sustainability. Curitiba has become a more livable city by building an efficient intra-urban bus system, expanding urban green space, and meeting the basic needs of the urban poor. Kerala has attained social harmony by emphasizing equitable resource distribution rather than consumption, by restraining reproduction, and by attacking divisions of race, caste, religion, and gender. Nayarit has sought to balance development with the environment by framing a nature-friendly development plan that protects natural systems from urban development and that involves the public in the development process. A detailed examination of these alternative cultural development models reveals a myriad of possible means by which economic, social, and environmental sustainability might be advanced in practice.
The author concludes that while these examples from the developing world cannot be directly translated to cities in the developed world, they do indicate in a general sense the imaginative policies that any society must foster if it is to achieve ‘urban sustainability’.", "title": "" }, { "docid": "cbf878cd5fbf898bdf88a2fcf5024826", "text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.", "title": "" }, { "docid": "feb34f36aed8e030f93c0adfbe49ee8b", "text": "Complex queries containing outer joins are, for the most part, executed by commercial DBMS products in an \"as written\" manner. Only a very few reorderings of the operations are considered and the benefits of considering comprehensive reordering schemes are not exploited. This is largely due to the fact there are no readily usable results for reordering such operations for relations with duplicates and/or outer join predicates that are other than \"simple.\" Most previous approaches have ignored duplicates and complex predicates; the very few that have considered these aspects have suggested approaches that lead to a possibly exponential number of, and redundant intermediate joins. Since traditional query graph models are inadequate for modeling outer join queries with complex predicates, we present the needed hypergraph abstraction and algorithms for reordering such queries with joins and outer joins. As a result, the query optimizer can explore a significantly larger space of execution plans, and choose one with a low cost. Further, these algorithms are easily incorporated into well known and widely used enumeration methods such as dynamic programming.", "title": "" }, { "docid": "01288eefbf2bc0e8c9dc4b6e0c6d70e9", "text": "The latest discoveries on diseases and their diagnosis/treatment are mostly disseminated in the form of scientific publications. However, with the rapid growth of the biomedical literature and a high level of variation and ambiguity in disease names, the task of retrieving disease-related articles becomes increasingly challenging using the traditional keywordbased approach. An important first step for any disease-related information extraction task in the biomedical literature is the disease mention recognition task. However, despite the strong interest, there has not been enough work done on disease name identification, perhaps because of the difficulty in obtaining adequate corpora. Towards this aim, we created a large-scale disease corpus consisting of 6900 disease mentions in 793 PubMed citations, derived from an earlier corpus. Our corpus contains rich annotations, was developed by a team of 12 annotators (two people per annotation) and covers all sentences in a PubMed abstract. 
Disease mentions are categorized into Specific Disease, Disease Class, Composite Mention and Modifier categories. When used as the gold standard data for a state-of-the-art machine-learning approach, significantly higher performance can be found on our corpus than the previous one. Such characteristics make this disease name corpus a valuable resource for mining disease-related information from biomedical text. The NCBI corpus is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Fe llows/Dogan/disease.html.", "title": "" }, { "docid": "bc3f64571ac833049e95994c675df26a", "text": "Effective Poisson–Nernst–Planck (PNP) equations are derived for ion transport in charged porous media under forced convection (periodic flow in the frame of the mean velocity) by an asymptotic multiscale expansion with drift. The homogenized equations provide a modeling framework for engineering while also addressing fundamental questions about electrodiffusion in charged porous media, relating to electroneutrality, tortuosity, ambipolar diffusion, Einstein’s relation, and hydrodynamic dispersion. The microscopic setting is a two-component periodic composite consisting of a dilute electrolyte continuum (described by standard PNP equations) and a continuous dielectric matrix, which is impermeable to the ions and carries a given surface charge. As a first approximation for forced convection, the electrostatic body force on the fluid and electro-osmotic flows are neglected. Four new features arise in the upscaled equations: (i) the effective ionic diffusivities and mobilities become tensors, related to the microstructure; (ii) the effective permittivity is also a tensor, depending on the electrolyte/matrix permittivity ratio and the ratio of the Debye screening length to the macroscopic length of the porous medium; (iii) the microscopic convection leads to a diffusion-dispersion correction in the effective diffusion tensor; and (iv) the surface charge per volume appears as a continuous “background charge density,” as in classical membrane models. The coefficient tensors in the upscaled PNP equations can be calculated from periodic reference cell problems. For an insulating solid matrix, all gradients are corrected by the same tensor, and the Einstein relation holds at the macroscopic scale, which is not generally the case for a polarizable matrix, unless the permittivity and electric field are suitably defined. In the limit of thin double layers, Poisson’s equation is replaced by macroscopic electroneutrality (balancing ionic and surface charges). The general form of the macroscopic PNP equations may also hold for concentrated solution theories, based on the local-density and mean-field approximations. These results have broad applicability to ion transport in porous electrodes, separators, membranes, ion-exchange resins, soils, porous rocks, and biological tissues.", "title": "" }, { "docid": "e0580a51b7991f86559a7a3aa8b26204", "text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. 
The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.", "title": "" }, { "docid": "1a5ddde73f38ab9b2563540c36c222c0", "text": "This paper presents a self-adaptive autonomous online learning through a general type-2 fuzzy system (GT2 FS) for the motor imagery (MI) decoding of a brain-machine interface (BMI) and navigation of a bipedal humanoid robot in a real experiment, using electroencephalography (EEG) brain recordings only. GT2 FSs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of EEG channels is limited and fixed; 2) no possibility of performing repeated user training sessions; and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel online learning method presented in this paper consists of a self-adaptive GT2 FS that can autonomously self-adapt both its parameters and structure via creation, fusion, and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath–Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models), which are learnt in a continous (trial-by-trial) non-iterative basis. The effectiveness of the proposed method is demonstrated in a detailed BMI experiment, in which 15 untrained users were able to accurately interface with a humanoid robot, in a single session, using signals from six EEG electrodes only.", "title": "" }, { "docid": "914c985dc02edd09f0ee27b75ecee6a4", "text": "Whether the development of face recognition abilities truly reflects changes in how faces, specifically, are perceived, or rather can be attributed to more general perceptual or cognitive development, is debated. Event-related potential (ERP) recordings on the scalp offer promise for this issue because they allow brain responses to complex visual stimuli to be relatively well isolated from other sensory, cognitive and motor processes. ERP studies in 5- to 16-year-old children report large age-related changes in amplitude, latency (decreases) and topographical distribution of the early visual components, the P1 and the occipito-temporal N170. To test the face specificity of these effects, we recorded high-density ERPs to pictures of faces, cars, and their phase-scrambled versions from 72 children between the ages of 4 and 17, and a group of adults. We found that none of the previously reported age-dependent changes in amplitude, latency or topography of the P1 or N170 were specific to faces. Most importantly, when we controlled for age-related variations of the P1, the N170 appeared remarkably similar in amplitude and topography across development, with much smaller age-related decreases in latencies than previously reported. At all ages the N170 showed equivalent face-sensitivity: it had the same topography and right hemisphere dominance, it was absent for meaningless (scrambled) stimuli, and larger and earlier for faces than cars. The data also illustrate the large amount of inter-individual and inter-trial variance in young children's data, which causes the N170 to merge with a later component, the N250, in grand-averaged data. 
Based on our observations, we suggest that the previously reported \"bi-fid\" N170 of young children is in fact the N250. Overall, our data indicate that the electrophysiological markers of face-sensitive perceptual processes are present from 4 years of age and do not appear to change throughout development.", "title": "" }, { "docid": "d0aa53919bbb869a2c033247e413fc72", "text": "We describe and present a new Question Answering (QA) component that can be easily used by the QA research community. It can be used to answer questions over DBpedia and Wikidata. The language support over DBpedia is restricted to English, while it can be used to answer questions in 4 different languages over Wikidata namely English, French, German and Italian. Moreover it supports both full natural language queries as well as keyword queries. We describe the interfaces to access and reuse it and the services it can be combined with. Moreover we show the evaluation results we achieved on the QALD-7 benchmark.", "title": "" }, { "docid": "df97ff54b80a096670c7771de1f49b6d", "text": "In recent times, Bitcoin has gained special attention both from industry and academia. The underlying technology that enables Bitcoin (or more generally crypto-currency) is called blockchain. At the core of the blockchain technology is a data structure that keeps record of the transactions in the network. The special feature that distinguishes it from existing technology is its immutability of the stored records. To achieve immutability, it uses consensus and cryptographic mechanisms. As the data is stored in distributed nodes this technology is also termed as \"Distributed Ledger Technology (DLT)\". As many researchers and practitioners are joining the hype of blockchain, some of them are raising the question about the fundamental difference between blockchain and traditional database and its real value or potential. In this paper, we present a critical analysis of both technologies based on a survey of the research literature where blockchain solutions are applied to various scenarios. Based on this analysis, we further develop a decision tree diagram that will help both practitioners and researchers to choose the appropriate technology for their use cases. Using our proposed decision tree we evaluate a sample of the existing works to see to what extent the blockchain solutions have been used appropriately in the relevant problem domains.", "title": "" }, { "docid": "fcf410fc492f3ddf80be9cb5351f7aed", "text": "Unmanned Combat Aerial Vehicle (UCAV) research has allowed the state of the art of the remote-operation of these technologies to advance significantly in modern times, though mostly focusing on ground strike scenarios. Within the context of air-to-air combat, millisecond long timeframes for critical decisions inhibit remoteoperation of UCAVs. Beyond this, given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon. 
While many proponents for an increase in autonomous capabilities herald the ability to design aircraft that can perform extremely high-g maneuvers as well as the benefit of reducing risk to our pilots, this white paper will primarily focus on the increase in capabilities of real-time decision making.", "title": "" }, { "docid": "f87a4ddb602d9218a0175a9e804c87c6", "text": "We present a novel online audio-score alignment approach for multi-instrument polyphonic music. This approach uses a 2-dimensional state vector to model the underlying score position and tempo of each time frame of the audio performance. The process model is defined by dynamic equations to transition between states. Two representations of the observed audio frame are proposed, resulting in two observation models: a multi-pitch-based and a chroma-based. Particle filtering is used to infer the hidden states from observations. Experiments on 150 music pieces with polyphony from one to four show the proposed approach outperforms an existing offline global string alignment-based score alignment approach. Results also show that the multi-pitch-based observation model works better than the chroma-based one.", "title": "" }, { "docid": "abbb210122d470215c5a1d0420d9db06", "text": "Ensemble clustering, also known as consensus clustering, is emerging as a promising solution for multi-source and/or heterogeneous data clustering. The co-association matrix based method, which redefines the ensemble clustering problem as a classical graph partition problem, is a landmark method in this area. Nevertheless, the relatively high time and space complexity preclude it from real-life large-scale data clustering. We therefore propose SEC, an efficient Spectral Ensemble Clustering method based on co-association matrix. We show that SEC has theoretical equivalence to weighted K-means clustering and results in vastly reduced algorithmic complexity. We then derive the latent consensus function of SEC, which to our best knowledge is among the first to bridge co-association matrix based method to the methods with explicit object functions. The robustness and generalizability of SEC are then investigated to prove the superiority of SEC in theory. We finally extend SEC to meet the challenge rising from incomplete basic partitions, based on which a scheme for big data clustering can be formed. Experimental results on various real-world data sets demonstrate that SEC is an effective and efficient competitor to some state-of-the-art ensemble clustering methods and is also suitable for big data clustering.", "title": "" }, { "docid": "45cff09810b8741d8be1010aa6ff3000", "text": "This paper discusses experience in applying time harmonic three-dimensional (3D) finite element (FE) analysis in analyzing an axial-flux (AF) solid-rotor induction motor (IM). The motor is a single rotor - single stator AF IM. The construction presented in this paper has not been analyzed before in any technical documents. The field analysis and the comparison of torque calculation results of the 3D calculations with measured torque results are presented", "title": "" }, { "docid": "2c1689a9a6d257f9e2ce8f33a1e30cb9", "text": "This study examined the use of neural word embeddings for clinical abbreviation disambiguation, a special case of word sense disambiguation (WSD). 
We investigated three different methods for deriving word embeddings from a large unlabeled clinical corpus: one existing method called Surrounding based embedding feature (SBE), and two newly developed methods: Left-Right surrounding based embedding feature (LR_SBE) and MAX surrounding based embedding feature (MAX_SBE). We then added these word embeddings as additional features to a Support Vector Machines (SVM) based WSD system. Evaluation using the clinical abbreviation datasets from both the Vanderbilt University and the University of Minnesota showed that neural word embedding features improved the performance of the SVMbased clinical abbreviation disambiguation system. More specifically, the new MAX_SBE method outperformed the other two methods and achieved the state-of-the-art performance on both clinical abbreviation datasets.", "title": "" }, { "docid": "f51583c6eb5a0d6e27823e0714d40ef5", "text": "Studies of emotion regulation typically contrast two or more strategies (e.g., reappraisal vs. suppression) and ignore variation within each strategy. To address such variation, we focused on cognitive reappraisal and considered the effects of goals (i.e., what people are trying to achieve) and tactics (i.e., what people actually do) on outcomes (i.e., how affective responses change). To examine goals, we randomly assigned participants to either increase positive emotion or decrease negative emotion to a negative stimulus. To examine tactics, we categorized participants' reports of how they reappraised. To examine reappraisal outcomes, we measured experience and electrodermal responding. Findings indicated that (a) the goal of increasing positive emotion led to greater increases in positive affect and smaller decreases in skin conductance than the goal of decreasing negative emotion, and (b) use of the reality challenge tactic was associated with smaller increases in positive affect during reappraisal. These findings suggest that reappraisal can be implemented in the service of different emotion goals, using different tactics. Such differences are associated with different outcomes, and they should be considered in future research and applied attempts to maximize reappraisal success.", "title": "" } ]
scidocsrr
d4793757d335d0616fc789e26cd2ac32
A0C: Alpha Zero in Continuous Action Space
[ { "docid": "45940a48b86645041726120fb066a1fa", "text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "title": "" }, { "docid": "d4a0b5558045245a55efbf9b71a84bc3", "text": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.", "title": "" }, { "docid": "9ec7b122117acf691f3bee6105deeb81", "text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.", "title": "" } ]
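The first passage above describes UCT, which guides Monte-Carlo tree search with a bandit-style (UCB1) selection rule. A minimal Python sketch of that selection step follows; the Node structure, its field names, and the exploration constant c = 1.4 are illustrative assumptions, not the papers' implementation.

import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    visits: int = 0            # times this child has been tried (n)
    total_value: float = 0.0   # sum of returns backed up through this child
    children: List["Node"] = field(default_factory=list)

def uct_select(parent: Node, c: float = 1.4) -> int:
    # Pick the child maximizing mean value plus a UCB1 exploration bonus.
    best_idx, best_score = -1, float("-inf")
    for i, child in enumerate(parent.children):
        if child.visits == 0:
            return i  # try untried actions first
        score = (child.total_value / child.visits
                 + c * math.sqrt(math.log(parent.visits) / child.visits))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Toy usage: the less-visited child earns the larger exploration bonus.
root = Node(visits=10, children=[Node(visits=8, total_value=4.0),
                                 Node(visits=2, total_value=0.8)])
print(uct_select(root))  # prints 1

AlphaGo Zero-style variants replace this bonus with one weighted by a learned policy prior (PUCT), but the select-expand-evaluate-backup loop sketched here is the same.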
[ { "docid": "6241da02b35863e8aa0ea08292340de5", "text": "PmSVM (Power Mean SVM), a classifier that trains significantly faster than state-of-the-art linear and non-linear SVM solvers in large scale visual classification tasks, is presented. PmSVM also achieves higher accuracies. A scalable learning method for large vision problems, e.g., with millions of examples or dimensions, is a key component in many current vision systems. Recent progresses have enabled linear classifiers to efficiently process such large scale problems. Linear classifiers, however, usually have inferior accuracies in vision tasks. Non-linear classifiers, on the other hand, may take weeks or even years to train. We propose a power mean kernel and present an efficient learning algorithm through gradient approximation. The power mean kernel family include as special cases many popular additive kernels. Empirically, PmSVM is up to 5 times faster than LIBLINEAR, and two times faster than state-of-the-art additive kernel classifiers. In terms of accuracy, it outperforms state-of-the-art additive kernel implementations, and has major advantages over linear SVM.", "title": "" }, { "docid": "37936de50a1d3fa8612a465b6644c282", "text": "Nature uses a limited, conservative set of amino acids to synthesize proteins. The ability to genetically encode an expanded set of building blocks with new chemical and physical properties is transforming the study, manipulation and evolution of proteins, and is enabling diverse applications, including approaches to probe, image and control protein function, and to precisely engineer therapeutics. Underpinning this transformation are strategies to engineer and rewire translation. Emerging strategies aim to reprogram the genetic code so that noncanonical biopolymers can be synthesized and evolved, and to test the limits of our ability to engineer the translational machinery and systematically recode genomes.", "title": "" }, { "docid": "c02fb121399e1ed82458fb62179d2560", "text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.", "title": "" }, { "docid": "aec1d7de7ddd0c9991c05611c20450e4", "text": "A set of circles, rectangles, and convex polygons are to be cut from rectangular design plates to be produced, or from a set of stocked rectangles of known geometric dimensions. The objective is to minimize the area of the design rectangles. 
The design plates are subject to lower and upper bounds of their widths and lengths. The objects are free of any orientation restrictions. If all nested objects fit into one design or stocked plate the problem is formulated and solved as a nonconvex nonlinear programming problem. If the number of objects cannot be cut from a single plate, additional integer variables are needed to represent the allocation problem leading to a nonconvex mixed integer nonlinear optimization problem. This is the first time that circles and arbitrary convex polygons are treated simultaneously in this context. We present exact mathematical programming solutions to both the design and allocation problem. For small number of objects to be cut we compute globally optimal solutions. One key idea in the developed NLP and MINLP models is to use separating hyperplanes to ensure that rectangles and polygons do not overlap with each other or with the circles. Another important idea used when dealing with several resource rectangles is to develop a model formulation which connects the binary variables only to the variables representing the center of the circles or the vertices of the polytopes but not to the nonoverlap or shape constraints. We support the solution process by symmetry breaking constraints. In addition we compute lower bounds, which are constructed by a relaxed model in which each polygon is replaced by the largest circle fitting into that polygon. We have successfully applied several solution techniques to solve this problem among them the Branch&Reduce Optimization Navigator (BARON) and the LindoGlobal solver called from GAMS, and, as described in Rebennack et al. (2008, [21]), a column enumeration approach in which the columns represent the assignments. Good feasible solutions are computed within seconds or minutes usually during preprocessing. In most cases they turn out to be globally optimal. For up to 10 circles, we prove global optimality up to a gap of the order of 10 in short time. Cases with a modest number of objects, for instance, 6 circles and 3 rectangles, are also solved in short time to global optimality. For test instances involving non-rectangular polygons it is difficult to obtain small gaps. In such cases we are content to obtain gaps of the order of 10 percent.", "title": "" }, { "docid": "2bda1b1482ca7b74078b10654576b24d", "text": "A pattern recognition pipeline consists of three stages: data pre-processing, feature extraction, and classification. Traditionally, most research effort is put into extracting appropriate features. With the advent of GPU-accelerated computing and Deep Learning, appropriate features can be discovered as part of the training process. Understanding these discovered features is important: we might be able to learn something new about the domain in which our model operates, or be comforted by the fact that the model extracts “sensible” features. This work discusses and applies methods of visualizing the features learned by Convolutional Neural Networks (CNNs). Our main contribution is an extension of an existing visualization method. The extension makes the method able to visualize the features in intermediate layers of a CNN. Most notably, we show that the features extracted in the deeper layers of a CNN trained to diagnose Diabetic Retinopathy are also the features used by human clinicians. 
Additionally, we published our visualization method in a software package.", "title": "" }, { "docid": "d7793313ab21020e79e41817b8372ee8", "text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.", "title": "" }, { "docid": "443a4fe9e7484a18aa53a4b142d93956", "text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.", "title": "" }, { "docid": "fe06ac2458e00c5447a255486189f1d1", "text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.", "title": "" }, { "docid": "5814f71c0fbbd1721f6c3ad948895c62", "text": "Technological innovations made it possible to create more and more realistic figures. Such figures are often created according to human appearance and behavior allowing interaction with artificial systems in a natural and familiar way. 
In 1970, the Japanese roboticist Masahiro Mori observed, however, that robots and prostheses with a very – but not perfect – human-like appearance can elicit eerie, uncomfortable, and even repulsive feelings. While real people or stylized figures do not seem to evoke such negative feelings, human depictions with only minor imperfections fall into the “uncanny valley,” as Mori put it. Today, further innovations in computer graphics led virtual characters into the uncanny valley. Thus, they have been subject of a number of disciplines. For research, virtual characters created by computer graphics are particularly interesting as they are easy to manipulate and, thus, can significantly contribute to a better understanding of the uncanny valley and human perception. For designers and developers of virtual characters such as in animated movies or games, it is important to understand how the appearance and human-likeness or virtual realism influence the experience and interaction of the user and how they can create believable and acceptable avatars and virtual characters despite the uncanny valley. This work investigates these aspects and is the next step in the exploration of the uncanny valley.", "title": "" }, { "docid": "baf8d2176f8c9058967fb3636022cd72", "text": "The ability to provide assistance for a student at the appropriate level is invaluable in the learning process. Not only does it aid the student's learning process but also prevents problems, such as student frustration and floundering. Students' key demographic characteristics and their marks in a small number of written assignments can constitute the training set for a regression method in order to predict the student's performance. The scope of this work compares some of the state of the art regression algorithms in the application domain of predicting students' marks. A number of experiments have been conducted with six algorithms, which were trained using datasets provided by the Hellenic Open University. Finally, a prototype version of software support tool for tutors has been constructed implementing the M5rules algorithm, which proved to be the most appropriate among the tested algorithms.", "title": "" }, { "docid": "88302ac0c35e991b9db407f268fdb064", "text": "We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the chip layout based on the state-of-the-art memory, LPDDR4 where McDRAM is equipped with 2048 MACs in a single chip package with a small area overhead (4.7%). Compared with the state-of-the-art accelerator, TPU and the power-efficient GPU, Nvidia P4, McDRAM offers 9.5× and 14.4× speedup, respectively, in the case that the large-scale MLPs and RNNs adopt the batch size of 1.
McDRAM also gives 2.1× and 3.7× better computational efficiency in TOPS/W than TPU and P4, respectively, for the large batches.", "title": "" }, { "docid": "5a9d0e5046129bbdad435980f125db37", "text": "The impact of channel width scaling on low-frequency noise (LFN) and high-frequency performance in multifinger MOSFETs is reported in this paper. The compressive stress from shallow trench isolation (STI) cannot explain the lower LFN in extremely narrow devices. STI top corner rounding (TCR)-induced ΔW is identified as an important factor that is responsible for the increase in transconductance Gm and the reduction in LFN with width scaling to nanoscale regime. A semi-empirical model was derived to simulate the effective mobility (μeff) degradation from STI stress and the increase in effective width (Weff) from ΔW due to STI TCR. The proposed model can accurately predict width scaling effect on Gm based on a tradeoff between μeff and Weff. The enhanced STI stress may lead to an increase in interface traps density (Nit), but the influence is relatively minor and can be compensated by the Weff effect. Unfortunately, the extremely narrow devices suffer fT degradation due to an increase in Cgg. The investigation of impact from width scaling on μeff, Gm, and LFN, as well as the tradeoff between LFN and high-frequency performance, provides an important layout guideline for analog and RF circuit design.", "title": "" }, { "docid": "8a478da1c2091525762db35f1ac7af58", "text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.", "title": "" }, { "docid": "88fb71e503e0d0af7515dd8489061e25", "text": "The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype to reality. SH is the major building block for Smart Cities and have long been a dream for decades, hobbyists in the late 1970s made Home Automation (HA) possible when personal computers started invading home spaces. While SH can share most of the IoT technologies, there are unique characteristics that make SH special. From the result of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific quality of the SH building blocks. © 2016 Elsevier B.V.
All rights reserved.", "title": "" }, { "docid": "404a32f89d6273a63b7ae945514655d2", "text": "Miniaturized minimally-invasive implants with wireless power and communication links have the potential to enable closed-loop treatments and precise diagnostics. As with wireless power transfer, robust wireless communication between implants and external transceivers presents challenges and tradeoffs with miniaturization and increasing depth. Both link efficiency and available bandwidth need to be considered for communication capacity. This paper analyzes and reviews active electromagnetic and ultrasonic communication links for implants. Example transmitter designs are presented for both types of links. Electromagnetic links for mm-sized implants have demonstrated high data rates sufficient for most applications up to Mbps range; nonetheless, they have so far been limited to depths under 5 cm. Ultrasonic links, on the other hand, have shown much deeper transmission depths, but with limited data rate due to their low operating frequency. Spatial multiplexing techniques are proposed to increase ultrasonic data rates without additional power or bandwidth.", "title": "" }, { "docid": "a57aa7ff68f7259a9d9d4d969e603dcd", "text": "Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And seeing as it is the people who form the societies, a constantly changing society is only natural. However something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments that indicate that the changes seem to be following each other faster, especially over the last few decades. We can for instance, point to the pluralisation (or a growing versatility), differentialisation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.", "title": "" }, { "docid": "423c37020f097cf42635b0936709c7fe", "text": "Two major goals in machine learning are the discovery of complex multidimensional solutions and continual improvement of existing solutions. In this paper, we argue that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT is applied to an open-ended coevolutionary robot duel domain where robot controllers compete head to head. Because the robot duel domain supports a wide range of sophisticated strategies, and because coevolution benefits from an escalating arms race, it serves as a suitable testbed for observing the effect of evolving increasingly complex controllers. The result is an arms race of increasingly sophisticated strategies. When compared to the evolution of networks with fixed structure, complexifying networks discover significantly more sophisticated strategies.
The results suggest that in order to realize the full potential of evolution, and search in general, solutions must be allowed to complexify as well as optimize.", "title": "" }, { "docid": "6c92652aa5bab1b25910d16cca697d48", "text": "Intrusion detection has attracted a considerable interest from researchers and industries. The community, after many years of research, still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data, with changing patterns in real time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection which influences the effectiveness of machine learning (ML) IDS is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems. Keywords— Shallow network, Deep networks, Intrusion detection, False positive alarm rates and True positive alarm rates 1.0 INTRODUCTION Computer networks have developed rapidly over the years contributing significantly to social and economic development. International trade, healthcare systems and military capabilities are examples of human activity that increasingly rely on networks. This has led to an increasing interest in the security of networks by industry and researchers. The importance of Intrusion Detection Systems (IDS) is critical as networks can become vulnerable to attacks from both internal and external intruders [1], [2]. An IDS is a detection system put in place to monitor computer networks. These have been in use since the 1980’s [3]. By analysing patterns of captured data from a network, IDS help to detect threats [4]. These threats can be devastating, for example, Denial of service (DoS) denies or prevents legitimate users resource on a network by introducing unwanted traffic [5]. Malware is another example, where attackers use malicious software to disrupt systems [6].", "title": "" } ]
scidocsrr
7dd591b32159f4be0c666e32796642aa
GamePad: A Learning Environment for Theorem Proving
[ { "docid": "cc7033023e1c5a902dfa10c8346565c4", "text": "Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first order formulas with respect to combinations of background theories such as: arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT Solver freely available from Microsoft Research. It is used in various software verification and analysis applications.", "title": "" }, { "docid": "cd8c1c24d4996217c8927be18c48488f", "text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "title": "" }, { "docid": "4381ee2e578a640dda05e609ed7f6d53", "text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "title": "" } ]
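The first passage above introduces Z3, which is normally driven programmatically from verification and analysis tools. A minimal satisfiability query through its Python bindings (the z3-solver package) might look like the sketch below; the specific integer constraint is an arbitrary illustration, not an example taken from the passage.

from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x > 0, y > 0, x + 2 * y == 7)   # a small linear integer-arithmetic query

if s.check() == sat:   # decide satisfiability modulo the arithmetic theory
    print(s.model())   # one satisfying assignment, e.g. x = 5, y = 1
else:
    print("unsatisfiable")

The same bindings expose models and unsat cores, which is part of what makes the solver convenient to embed in larger proof and program-analysis pipelines.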
[ { "docid": "5a4315e5887bdbb6562e76b54d03beeb", "text": "A combination of conventional cross sectional process and device simulations combined with top down and 3D device simulations have been used to design and optimise the integration of a 100V Lateral DMOS (LDMOS) device for high side bridge applications. This combined simulation approach can streamline the device design process and gain important information about end effects which are lost from 2D cross sectional simulations. Design solutions to negate detrimental end effects are proposed and optimised by top down and 3D simulations and subsequently proven on tested silicon.", "title": "" }, { "docid": "47f9724fd9dc25eda991854074ac0afa", "text": "This paper reviews the state of the art in piezoelectric energy harvesting. It presents the basics of piezoelectricity and discusses materials choice. The work places emphasis on material operating modes and device configurations, from resonant to non-resonant devices and also to rotational solutions. The reviewed literature is compared based on power density and bandwidth. Lastly, the question of power conversion is addressed by reviewing various circuit solutions.", "title": "" }, { "docid": "b6715e3ee8b2876b479522c03c1d674a", "text": "Normalizing for atmospheric and land surface bidirectional reflectance distribution function (BRDF) effects is essential in satellite data processing. It is important both for a single scene when the combination of land covers, sun, and view angles create anisotropy and for multiple scenes in which the sun angle changes. As a consequence, it is important for inter-sensor calibration and comparison. Procedures based on physics-based models have been applied successfully with the Moderate Resolution Imaging Spectroradiometer (MODIS) data. For Landsat and other higher resolution data, similar options exist. However, the estimation of BRDF models using internal fitting is not available due to the smaller variation of view and solar angles and infrequent revisits. In this paper, we explore the potential for developing operational procedures to correct Landsat data using coupled physics-based atmospheric and BRDF models. The process was realized using BRDF shape functions derived from MODIS with the MODTRAN 4 radiative transfer model. The atmospheric and BRDF correction algorithm was tested for reflectance factor estimation using Landsat data for two sites with different land covers in Australia. The Landsat reflectance values had a good agreement with ground based spectroradiometer measurements. In addition, overlapping images from adjacent paths in Queensland, Australia, were also used to validate the BRDF correction. The results clearly show that the algorithm can remove most of the BRDF effect without empirical adjustment. The comparison between normalized Landsat and MODIS reflectance factor also shows a good relationship, indicating that cross calibration between the two sensors is achievable.", "title": "" }, { "docid": "78b6d4935256010742bc67491935374d", "text": "Technology has enabled us to imagine beyond our working capacities and think of solutions that can replace the monotonous work with automated machines and systems. This research paper is aimed at making the parking system agile, robust and more convenient for people. Albeit, several parking solutions are available, this system integrates all problems into one single idea that can be permanently embedded as a solution. 
The system will incorporate different modules like parking availability calculation, proximity estimation and payment service. The system will also guide the vehicle owners to navigate through the parking lot. Moreover, an analysis will be conducted to examine the benefits of the current project and how it can be improved.", "title": "" }, { "docid": "71aae4cbccf6d3451d35528ceca8b8a9", "text": "We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using simple linear SVM on the resultant bag of hierarchical space-time segments representation, we attain better than, or comparable to, state-of-the-art action recognition performance on two challenging benchmark datasets and at the same time produce good action localization results.", "title": "" }, { "docid": "08dbd88adb399721e0f5ee91534c9888", "text": "Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.", "title": "" }, { "docid": "ab56aa5fc6fe6557c2be28056cfb660e", "text": "Autophagy is an evolutionarily ancient mechanism that ensures the lysosomal degradation of old, supernumerary or ectopic cytoplasmic entities. Most eukaryotic cells, including neurons, rely on proficient autophagic responses for the maintenance of homeostasis in response to stress. Accordingly, autophagy mediates neuroprotective effects following some forms of acute brain damage, including methamphetamine intoxication, spinal cord injury and subarachnoid haemorrhage. In some other circumstances, however, the autophagic machinery precipitates a peculiar form of cell death (known as autosis) that contributes to the aetiology of other types of acute brain damage, such as neonatal asphyxia. 
Here, we dissect the context-specific impact of autophagy on non-infectious acute brain injury, emphasizing the possible therapeutic application of pharmacological activators and inhibitors of this catabolic process for neuroprotection.", "title": "" }, { "docid": "40099678d2c97013eb986d3be93eefb4", "text": "Mortality prediction of intensive care unit (ICU) patients facilitates hospital benchmarking and has the opportunity to provide caregivers with useful summaries of patient health at the bedside. The development of novel models for mortality prediction is a popular task in machine learning, with researchers typically seeking to maximize measures such as the area under the receiver operator characteristic curve (AUROC). The number of ’researcher degrees of freedom’ that contribute to the performance of a model, however, presents a challenge when seeking to compare reported performance of such models. In this study, we review publications that have reported performance of mortality prediction models based on the Medical Information Mart for Intensive Care (MIMIC) database and attempt to reproduce the cohorts used in their studies. We then compare the performance reported in the studies against gradient boosting and logistic regression models using a simple set of features extracted from MIMIC. We demonstrate the large heterogeneity in studies that purport to conduct the single task of ’mortality prediction’, highlighting the need for improvements in the way that prediction tasks are reported to enable fairer comparison between models. We reproduced datasets for 38 experiments corresponding to 28 published studies using MIMIC. In half of the experiments, the sample size we acquired was 25% greater or smaller than the sample size reported. The highest discrepancy was 11,767 patients. While accurate reproduction of each study cannot be guaranteed, we believe that these results highlight the need for more consistent reporting of model design and methodology to allow performance improvements to be compared. We discuss the challenges in reproducing the cohorts used in the studies, highlighting the importance of clearly reported methods (e.g. data cleansing, variable selection, cohort selection) and the need for open code and publicly available benchmarks.", "title": "" }, { "docid": "e72872277a33dcf6d5c1f7e31f68a632", "text": "Tilt rotor unmanned aerial vehicle (TRUAV) with ability of hovering and high-speed cruise has attached much attention, but its transition control is still a difficult point because of varying dynamics. This paper proposes a multi-model adaptive control (MMAC) method for a quad-TRUAV, and the stability in the transition procedure could be ensured by considering corresponding dynamics. For safe transition, tilt corridor is considered firstly, and actual flight status should locate within it. Then, the MMAC controller is constructed according to mode probabilities, which are calculated by solving a quadratic programming problem based on a set of input- output plant models. Compared with typical gain scheduling control, this method could ensure transition stability more effectively.", "title": "" }, { "docid": "680be905a0f01e26e608ba7b4b79a94e", "text": "A cost-effective position measurement system based on optical mouse sensors is presented in this work. The system is intended to be used in a planar positioning stage for microscopy applications and as such, has strict resolution, accuracy, repeatability, and sensitivity requirements. 
Three techniques which improve the measurement system's performance in the context of these requirements are proposed; namely, an optical magnification of the image projected onto the mouse sensor, a periodic homing procedure to reset the error buildup, and a compensation of the undesired dynamics caused by filters implemented in the mouse sensor chip.", "title": "" }, { "docid": "99c25c7e8dfbdffb5949fc00730cbe15", "text": "The vegetation outlook (VegOut) is a geospatial tool for predicting general vegetation condition patterns across large areas. VegOut predicts a standardized seasonal greenness (SSG) measure, which represents a general indicator of relative vegetation health. VegOut predicts SSG values at multiple time steps (two to six weeks into the future) based on the analysis of “historical patterns” (i.e., patterns at each 1 km grid cell and time of the year) of satellite, climate, and oceanic data over an 18-year period (1989 to 2006). The model underlying VegOut capitalizes on historical climate–vegetation interactions and ocean–climate teleconnections (such as El Niño and the Southern Oscillation, ENSO) expressed over the 18-year data record and also considers several environmental characteristics (e.g., land use/cover type and soils) that influence vegetation’s response to weather conditions to produce 1 km maps that depict future general vegetation conditions. VegOut provides regionallevel vegetation monitoring capabilities with local-scale information (e.g., county to sub-county level) that can complement more traditional remote sensing–based approaches that monitor “current” vegetation conditions. In this paper, the VegOut approach is discussed and a case study over the central United States for selected periods of the 2008 growing season is presented to demonstrate the potential of this new tool for assessing and predicting vegetation conditions.", "title": "" }, { "docid": "30260d1a4a936c79e6911e1e91c3a84a", "text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.", "title": "" }, { "docid": "aec23c23dfb209513fe804a2558cd087", "text": "In recent years, STT-RAMs have been proposed as a promising replacement for SRAMs in on-chip caches. Although STT-RAMs benefit from high-density, non-volatility, and low-power characteristics, high rates of read disturbances and write failures are the major reliability problems in STTRAM caches. These disturbance/failure rates are directly affected not only by workload behaviors, but also by process variations. 
Several studies characterized the reliability of STT-RAM caches just for one cell, but the vulnerability of STT-RAM caches cannot be directly derived from these models. This paper extrapolates the reliability characteristics of one STT-RAM cell presented in previous studies to the vulnerability analysis of STT-RAM caches. To this end, we propose a high-level framework to investigate the vulnerability of STT-RAM caches affected by the per-cell disturbance/failure rates as well as the workload behaviors and process variations. This framework is an augmentation of the gem5 simulator. The investigation reveals that: 1) the read disturbance rate in a cache varies by 6 orders of magnitude for different workloads, 2) the write failure rate varies by 4 orders of magnitude for different workloads, and 3) the process variations increase the read disturbance and write failure rates by up to 5.8x and 8.9x, respectively.", "title": "" }, { "docid": "d5a882ecc0c78ee4c8456adb21914af4", "text": "Radiologists routinely examine medical images such as X-Ray, CT, or MRI and write reports summarizing their descriptive findings and conclusive impressions. A computer-aided radiology report generation system can lighten the workload for radiologists considerably and assist them in decision making. Although the rapid development of deep learning technology makes the generation of a single conclusive sentence possible, results produced by existing methods are not sufficiently reliable due to the complexity of medical images. Furthermore, generating detailed paragraph descriptions for medical images remains a challenging problem. To tackle this problem, we propose a novel generative model which generates a complete radiology report automatically. The proposed model incorporates Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) in a recurrent way. It is capable of not only generating high-level conclusive impressions, but also generating detailed descriptive findings sentence by sentence to support the conclusion. Furthermore, our multimodal model combines the encoding of the image and one generated sentence to construct an attention input to guide the generation of the next sentence, and henceforth maintains coherence among generated sentences. Experimental results on the publicly available Indiana U. Chest X-rays from the Open-i image collection show that our proposed recurrent attention model achieves significant improvements over baseline models according to multiple evaluation metrics.", "title": "" }, { "docid": "bd7f3decfe769db61f0577a60e39a26f", "text": "Automated food and drink recognition methods connect to cloud-based lookup databases (e.g., food item barcodes, previously identified food images, or previously classified NIR (Near Infrared) spectra of food and drink items databases) to match and identify a scanned food or drink item, and report the results back to the user. However, these methods remain of limited value if we cannot further reason with the identified food and drink items, ingredients and quantities/portion sizes in a proposed meal in various contexts; i.e., understand from a semantic perspective their types, properties, and interrelationships in the context of a given user's health condition and preferences. 
In this paper, we review a number of “food ontologies”, such as the Food Products Ontology/FOODpedia (by Kolchin and Zamula), Open Food Facts (by Gigandet et al.), FoodWiki (Ontology-driven Mobile Safe Food Consumption System by Celik), FOODS-Diabetes Edition (A Food-Oriented Ontology-Driven System by Snae Namahoot and Bruckner), and AGROVOC multilingual agricultural thesaurus (by the UN Food and Agriculture Organization, FAO). These food ontologies, with appropriate modifications (or as a basis, to be added to and further expanded) and together with other relevant non-food ontologies (e.g., about diet-sensitive disease conditions), can supplement the aforementioned lookup databases to enable progression from the mere automated identification of food and drinks in our meals to a more useful application whereby we can automatically reason with the identified food and drink items and their details (quantities and ingredients/bromatological composition) in order to better assist users in making the correct, healthy food and drink choices for their particular health condition, age, body weight/BMI (Body Mass Index), lifestyle and preferences, etc.", "title": "" }, { "docid": "544333c99f2b28e37702306bfe6521d4", "text": "Faced with unsustainable costs and enormous amounts of under-utilized data, health care needs more efficient practices, research, and tools to harness the full benefits of personal health and healthcare-related data. Imagine visiting your physician's office with a list of concerns and questions. What if you could walk out the office with a personalized assessment of your health? What if you could have personalized disease management and wellness plan? These are the goals and vision of the work discussed in this paper. The timing is right for such a research direction given the changes in health care, reimbursement, reform, meaningful use of electronic health care data, and patient-centered outcome mandate. We present the foundations of work that takes a Big Data driven approach towards personalized healthcare, and demonstrate its applicability to patient-centered outcomes, meaningful use, and reducing re-admission rates.", "title": "" }, { "docid": "86617458af24278fa2b69b544dc0f09e", "text": "Recent research on learning in work situations has focussed on concepts such as 'productive learning' and 'pedagogy of vocational learning'. In investigating what makes learning productive and what pedagogies enhance this, there is a tendency to take the notion of learning as unproblematic. This paper argues that much writing on workplace learning is strongly shaped by peoples' understandings of learning in formal educational situations. Such assumptions distort attempts to understand learning at work. The main focus of this paper is to problematise the concept of 'learning' and to identify the implications of this for attempts to understand learning at work and the conditions that enhance it. An alternative conception of learning that promises to do more justice to the richness of learning at work is presented and discussed. For several years now, the adult and vocational learning research group at University of Technology, Sydney, (now known as OVAL Research1), has been pursuing a systematic research agenda centred on issues about learning at work (e.g. Boud & Garrick 1999, Symes & McIntyre 2000, Beckett & Hager 2002). 
The OVAL research group’s two most recent seminar series have been focussed on ‘productive learning’ and ‘pedagogy of vocational learning’. Both of these topics reflect a concern with conditions that enhance rich learning in work situations. In attempting, however, to characterise what makes learning productive and what pedagogies enhance this, there may be a tendency to take the notion of learning as unproblematic. I have elsewhere argued that common understandings of learning uncritically incorporate assumptions that derive from previous formal learning experiences (Hager forthcoming). Likewise Elkjaer (2003) has recently pointed out how much writing on workplace learning is strongly shaped by the authors’ understandings of learning in formal educational situations. The main focus of this paper is to problematise the concept of ‘learning’ and to identify the implications of this for attempts to understand learning at work and the conditions that enhance it. A key claim is that government policies that impact significantly on learning at work commonly treat learning as a product, i.e. as the acquisition of discrete items of knowledge or skill. The argument is that these policies thereby obstruct attempts to develop satisfactory understandings of learning at work. 1 The Australian Centre for Organisational, Vocational and Adult Learning Research. (For details see www.oval.uts.edu.au) Problematising the Concept of Learning Although learning is still widely treated as an unproblematic concept in educational writings, there is growing evidence that its meaning increasingly is being contested. For instance Brown & Palincsar (1989, p. 394) observed: “Learning is a term with more meanings that there are theorists”. Schoenfeld (1999, p. 6) noted “....that the very definition of learning is contested, and that assumptions that people make regarding its nature and where it takes place also vary widely.” According to Winch “.....the possibility of giving a scientific or even a systematic account of human learning is ..... mistaken” (1998, p. 2). His argument is that there are many and diverse cases of learning, each subject to “constraints in a variety of contexts and cultures” which precludes them from being treated in a general way (1998, p. 85). He concludes that “... grand theories of learning .... are underpinned ... invariably ... by faulty epistemological premises” (Winch, 1998, p. 183). Not only is the concept of learning disputed amongst theorists, it seems that even those with the greatest claims to practical knowledge of learning may be deficient in their understanding. Those bastions of learning, higher education institutions can trace their origins back into the mists of time. If anyone knows from experience what learning is it should be them. Yet the recent cyber learning debacle suggests otherwise. Many of the world’s most illustrious universities have invested many millions of dollars setting up suites of online courses in the expectation of making large profits from offcampus students. According to Brabazon (2002), these initiatives have manifestly failed since prospective students were not prepared to pay the fees. Many of these online courses are now available free as a backup resource for on-campus students. Brabazon’s analysis is that these university ‘experts’ on learning have confused technology with teaching and tools with learning. 
The staggering sums of money mis-invested in online education certainly shows that universities may not be the experts in learning that they think they are. We can take Brabazon’s analysis a step further. The reason why tools were confused with learning, I argue, is that learning is not a well understood concept at the start of the 21st century. Perhaps it is in a similar position to the concept of motion at the end of the middle ages. Of course, motion is one of the central concepts in physics, just as learning is a central concept in education, and the social sciences generally. For a long time, understanding of motion was limited by adherence to the Aristotelian attempt to provide a single account of all motion. Aristotle proposed a second-order distinction between natural and violent motions. It was the ‘nature’ of all terrestrial bodies to have a natural motion towards the centre of the universe (the centre of the earth); but bodies were also subject to violent motions in any direction imparted by disruptive, external, ‘non-natural’ causes. So the idea was to privilege one kind of motion as basic and to account for others in terms of non-natural disruptions to this natural motion. The Aristotelian account persisted for so long because it was in accord with ‘common sense’ ideas on motion. Everyone was familiar with motion and thought that they understood it. Likewise, everyone has experienced formal schooling and this shapes how they understand learning. Thus, the type of learning that is familiar to everyone gains privileged status. The worth of other kinds of learning is judged by how well they approximate the favoured kind (Beckett & Hager 2002, section 6.1). The dominance of this concept of learning is also evident in educational thought, where there has been a major focus on learning in formal education settings. This dominant view of learning also fits well with ‘folk’ conceptions of the mind (Bereiter 2002). Real progress in understanding motion came when physicists departed from ‘common sense’ ideas and recognised that there are many different types of motion – falling, projectile, pendulum, wave, etc. each requiring their own account. Likewise, it seems there are many types of learning and things that can be learnt – propositions, skills, behaviours, attitudes, etc. Efforts to understand these may well require a range of theories each with somewhat different assumptions. The Monolithic Influence of Viewing Learning as a Product There is currently a dominant view of learning that is akin to the Aristotelian view of motion in its pervasive influence. It provides an account of supposedly the best kind of learning, and all cases of learning are judged by how well they fit this view. This dominant view of learning – the ‘common sense’ account – views the mind as a ‘container’ and ‘knowledge as a type of substance’ (Lakoff & Johnson 1980). Under the influence of the mind-as-container metaphor, knowledge is treated as consisting of objects contained in individual minds, something like the contents of mental filing cabinets. (Bereiter 2002, p. 179) Thus there is a focus on ‘adding more substance’ to the mind. This is the ‘folk theory’ of learning (e.g. Bereiter 2002). It emphasises the products of learning. At this stage it might be objected that the educationally sophisticated have long ago moved beyond viewing learning as a product. Certainly, as shown later in this paper, the educational arguments for an alternative view have been persuasive for quite some time now. 
Nevertheless, much educational policy and practice, including policies and practices that directly impact on the emerging interest in learning at work, are clearly rooted in the learning as product view. For instance, typical policy documents relating to CompetencyBased Training view work performance as a series of decontextualised atomic elements, which novice workers are thought of as needing to pick up one by one. Once a discrete element is acquired, transfer or application to appropriate future circumstances by the learner is assumed to be unproblematic. This is a pure learning as product approach. Similarly, policy documents on generic skills (core or basic skills) typically reflect similar assumptions. Putative generic skills, such as communication and problem solving, are presented as discrete, decontextualised elements that, once acquired, can simply be transferred to diverse situations. Certainly, in literature emanating from employer groups, this assumption is endemic. These, then, are two policy areas that are closely linked to learning at work that are dominated by learning as product assumptions. Of course, Lyotard (1984) and other postmodern writers (e.g. Usher & Edwards 1994) have argued that the recent neo-liberal marketisation of education results in a commodification of knowledge, in which knowledge is equated with information. Such information can, for instance, be readily stored and transmitted via microelectronic technology. Students become consumers of educational commodities. All of this is grist to the learning as product mill. However, it needs to be emphasised that learning as product was the dominant mindset long before the rise of neo-liberal marketisation of education. This is reflected in standard international educational nomenclature: acquisition of content, transfer of learning, delivery of courses, course providers, course offerings, course load, ", "title": "" }, { "docid": "162f080444935117c5125ae8b7c3d51e", "text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1", "title": "" }, { "docid": "ab9d4f991cf6fa1c6ecf4f2a7573cff1", "text": "Over the last decade, much research has been conducted in the field of human resource management (HRM) and its associations with firm performance. Prior studies have found substantial positive evidence for statistical associations between HRM practices and improved firm performance. 
The purpose of this study is to investigate the relationships between HRM practices and firm performance, with business strategy and environmental uncertainty as moderators. This study examines the relationships among HRM practices, environmental uncertainty, business strategy and firm performance. It was hypothesized that HRM practices could positively influence profitability and growth and negatively influence employee turnover. Data were collected using a mail questionnaire sent to human resource managers in manufacturing firms in Malaysia. A total of 162 useable responses were obtained and used for the purpose of analysis. Results of hierarchical regression used to test the relationships among the variables indicated that (1) human resource planning has a relationship with profitability and growth; (2) performance-based pay has a relationship with profitability and growth; (3) skills development has a relationship with involuntary employee turnover; (4) environmental uncertainty as a moderator influences the relationship between human resource planning, performance-based pay and profitability; and (5) business strategy as a moderator influences the relationship between performance-based pay and growth. The findings can form the basis for useful recommendations for Malaysian managers in encouraging the practice of human resource management and for employees who are", "title": "" } ]
scidocsrr
3712cd09117572df13f028a7163e7093
Cross-Language Authorship Attribution
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "b0991cd60b3e94c0ed3afede89e13f36", "text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.", "title": "" } ]
[ { "docid": "f5bc721d2b63912307c4ad04fb78dd2c", "text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even", "title": "" }, { "docid": "52c7ac92b5da3b37e3d657afa3e06377", "text": "Research on implicit cognition and addiction has expanded greatly during the past decade. This research area provides new ways to understand why people engage in behaviors that they know are harmful or counterproductive in the long run. Implicit cognition takes a different view from traditional cognitive approaches to addiction by assuming that behavior is often not a result of a reflective decision that takes into account the pros and cons known by the individual. Instead of a cognitive algebra integrating many cognitions relevant to choice, implicit cognition assumes that the influential cognitions are the ones that are spontaneously activated during critical decision points. This selective review highlights many of the consistent findings supporting predictive effects of implicit cognition on substance use and abuse in adolescents and adults; reveals a recent integration with dual-process models; outlines the rapid evolution of different measurement tools; and introduces new routes for intervention.", "title": "" }, { "docid": "58c0456c8ae9045898aca67de9954659", "text": "Channel sensing and spectrum allocation has long been of interest as a prospective addition to cognitive radios for wireless communications systems occupying license-free bands. Conventional approaches to cyclic spectral analysis have been proposed as a method for classifying signals for applications where the carrier frequency and bandwidths are unknown, but is, however, computationally complex and requires a significant amount of observation time for adequate performance. Neural networks have been used for signal classification, but only for situations where the baseband signal is present. By combining these techniques a more efficient and reliable classifier can be developed where a significant amount of processing is performed offline, thus reducing online computation. In this paper we take a renewed look at signal classification using spectral coherence and neural networks, the performance of which is characterized by Monte Carlo simulations", "title": "" }, { "docid": "377aec61877995ad2b677160fa43fefb", "text": "One of the major issues involved with communication is acoustic echo, which is actually a delayed version of sound reflected back to the source of sound hampering communication. 
Cancellation of these echoes involves the use of acoustic echo cancellers based on adaptive filters governed by adaptive algorithms. This paper presents a review of some of the algorithms of acoustic echo cancellation covering their merits and demerits. Various algorithms like LMS, NLMS, FLMS, LLMS, RLS, AFA, LMF have been discussed. Keywords: Adaptive Filter, Acoustic Echo, LMS, NLMS, FX-LMS, AAF, LLMS, RLS.", "title": "" }, { "docid": "d26091934bbc0192735e056cf150fc31", "text": "An Approximate Minimum Degree ordering algorithm (AMD) for preordering a symmetric sparse matrix prior to numerical factorization is presented. We use techniques based on the quotient graph for matrix factorization that allow us to obtain computationally cheap bounds for the minimum degree. We show that these bounds are often equal to the actual degree. The resulting algorithm is typically much faster than previous minimum degree ordering algorithms, and produces results that are comparable in quality with the best orderings from other minimum degree algorithms.", "title": "" }, { "docid": "8e8ba9e3178d6f586f8d551b4ba52851", "text": "Fake news, one of the biggest new-age problems, has the potential to mould opinions and influence decisions. The proliferation of fake news on social media and the Internet is deceiving people to an extent which needs to be stopped. The existing systems are inefficient in giving a precise statistical rating for any given news claim. Also, the restrictions on input and category of news make it less varied. This paper proposes a system that classifies unreliable news into different categories after computing an F-score. This system aims to use various NLP and classification techniques to help achieve maximum accuracy.", "title": "" }, { "docid": "89d05b1f40431af3cc6e2a8e71880e6f", "text": "Many test series have been developed to assess dog temperament and aggressive behavior, but most of them have been criticized for their relatively low predictive validity or being too long, stressful, and/or problematic to carry out. We aimed to develop a short and effective series of tests that corresponds with (a) the dog's bite history, and (b) owner evaluation of the dog's aggressive tendencies. Seventy-three pet dogs were divided into three groups by their biting history; non-biter, bit once, and multiple biter. All dogs were exposed to a short test series modeling five real-life situations: friendly greeting, take away bone, threatening approach, tug-of-war, and roll over. 
We found strong correlations between the in-test behavior and owner reports of dogs' aggressive tendencies towards strangers; however, the test results did not mirror the reported owner-directed aggressive tendencies. Three test situations (friendly greeting, take-away bone, threatening approach) proved to be effective in evoking specific behavioral differences according to dog biting history. Non-biters differed from biters, and there were also specific differences related to aggression and fear between the two biter groups. When a subsample of dogs was retested, the test revealed consistent results over time. We suggest that our test is adequate for a quick, general assessment of human-directed aggression in dogs, particularly to evaluate their tendency for aggressive behaviors towards strangers. Identifying important behavioral indicators of aggressive tendencies, this test can serve as a useful tool to study the genetic or neural correlates of human-directed aggression in dogs.", "title": "" }, { "docid": "5b148dd9f45a52d2961f348adf39e0ad", "text": "Research suggesting the beneficial effects of yoga on myriad aspects of psychological health has proliferated in recent years, yet there is currently no overarching framework by which to understand yoga's potential beneficial effects. Here we provide a theoretical framework and systems-based network model of yoga that focuses on integration of top-down and bottom-up forms of self-regulation. We begin by contextualizing yoga in historical and contemporary settings, and then detail how specific components of yoga practice may affect cognitive, emotional, behavioral, and autonomic output under stress through an emphasis on interoception and bottom-up input, resulting in physical and psychological health. The model describes yoga practice as a comprehensive skillset of synergistic process tools that facilitate bidirectional feedback and integration between high- and low-level brain networks, and afferent and re-afferent input from interoceptive processes (somatosensory, viscerosensory, chemosensory). From a predictive coding perspective we propose a shift to perceptual inference for stress modulation and optimal self-regulation. We describe how the processes that sub-serve self-regulation become more automatized and efficient over time and practice, requiring less effort to initiate when necessary and terminate more rapidly when no longer needed. To support our proposed model, we present the available evidence for yoga affecting self-regulatory pathways, integrating existing constructs from behavior theory and cognitive neuroscience with emerging yoga and meditation research. This paper is intended to guide future basic and clinical research, specifically targeting areas of development in the treatment of stress-mediated psychological disorders.", "title": "" }, { "docid": "4f8a52941e24de8ce82ba31cd3250deb", "text": "BACKGROUND\nThere is an increasing use of technology for teaching and learning in medical education but often the use of educational theory to inform the design is not made explicit. 
The educational theories, both normative and descriptive, used by medical educators determine how the technology is intended to facilitate learning and may explain why some interventions with technology may be less effective compared with others.\n\n\nAIMS\nThe aim of this study is to highlight the importance of medical educators making explicit the educational theories that inform their design of interventions using technology.\n\n\nMETHOD\nThe use of illustrative examples of the main educational theories to demonstrate the importance of theories informing the design of interventions using technology.\n\n\nRESULTS\nHighlights the use of educational theories for theory-based and realistic evaluations of the use of technology in medical education.\n\n\nCONCLUSION\nAn explicit description of the educational theories used to inform the design of an intervention with technology can provide potentially useful insights into why some interventions with technology are more effective than others. An explicit description is also an important aspect of the scholarship of using technology in medical education.", "title": "" }, { "docid": "5e3cbb89e7ba026d6f60a19aca8be4b8", "text": "This paper presents for the first time, the design of a dual band PIFA antenna for 5G applications on a low-cost substrate with smallest form factor and widest bandwidth in both bands (28 GHz and 38 GHz). The proposed dual band PIFA antenna consists of a shorted patch and a modified U-shaped slot in the patch. The antenna shows good matching at and around both center frequencies. The antenna shows clean radiation pattern and bandwidth of 3.34 GHz and 1.395 GHz and gain of 3.75 dBi and 5.06 dBi at 28 and 38 GHz respectively. This antenna has ultra-small form factor of 1.3 mm × 1.2 mm. Patch is shorted at one end with a metallic cylindrical via. A CPW line and a feeding via are used on the bottom side of the substrate to excite the PIFA antenna patterned on the top side of the substrate which also facilitate the measurements of the antenna at mm-wave frequencies. The antenna was designed on low cost Isola FR406 substrate.", "title": "" }, { "docid": "9435908ab7c10a858c223d3f08b87e74", "text": "The recent success of deep neural networks (DNNs) in speech recognition can be attributed largely to their ability to extract a specific form of high-level features from raw acoustic data for subsequent sequence classification or recognition tasks. Among the many possible forms of DNN features, what forms are more useful than others and how effective these DNN features are in connection with the different types of downstream sequence recognizers remained unexplored and are the focus of this paper. We report our recent work on the construction of a diverse set of DNN features, including the vectors extracted from the output layer and from various hidden layers in the DNN. We then apply these features as the inputs to four types of classifiers to carry out the identical sequence classification task of phone recognition. The experimental results show that the features derived from the top hidden layer of the DNN perform the best for all four classifiers, especially for the autoregressive-moving-average (ARMA) version of a recurrent neural network. 
The feature vector derived from the DNN's output layer performs slightly worse but better than any of the hidden layers in the DNN except the top one.", "title": "" }, { "docid": "417307155547a565d03d3f9c2a235b2e", "text": "Recent deep learning based methods have achieved the state-of-the-art performance for handwritten Chinese character recognition (HCCR) by learning discriminative representations directly from raw data. Nevertheless, we believe that the long-and-well investigated domain-specific knowledge should still help to boost the performance of HCCR. By integrating the traditional normalization-cooperated direction-decomposed feature map (directMap) with the deep convolutional neural network (convNet), we are able to obtain new highest accuracies for both online and offline HCCR on the ICDAR-2013 competition database. With this new framework, we can eliminate the needs for data augmentation and model ensemble, which are widely used in other systems to achieve their best results. This makes our framework to be efficient and effective for both training and testing. Furthermore, although directMap+convNet can achieve the best results and surpass human-level performance, we show that writer adaptation in this case is still effective. A new adaptation layer is proposed to reduce the mismatch between training and test data on a particular source layer. The adaptation process can be efficiently and effectively implemented in an unsupervised manner. By adding the adaptation layer into the pre-trained convNet, it can adapt to the new handwriting styles of particular writers, and the recognition accuracy can be further improved consistently and significantly. This paper gives an overview and comparison of recent deep learning based approaches for HCCR, and also sets new benchmarks for both online and offline HCCR.", "title": "" }, { "docid": "8f0ed599cec42faa0928a0931ee77b28", "text": "This paper describes the Connector and Acceptor patterns. The intent of these patterns is to decouple the active and passive connection roles, respectively, from the tasks a communication service performs once connections are established. Common examples of communication services that utilize these patterns include WWW browsers, WWW servers, object request brokers, and “superservers” that provide services like remote login and file transfer to client applications. This paper illustrates how the Connector and Acceptor patterns can help decouple the connection-related processing from the service processing, thereby yielding more reusable, extensible, and efficient communication software. When used in conjunction with related patterns like the Reactor [1], Active Object [2], and Service Configurator [3], the Acceptor and Connector patterns enable the creation of highly extensible and efficient communication software frameworks [4] and applications [5]. 
This paper is organized as follows: Section 2 outlines background information on networking and communication protocols necessary to appreciate the patterns in this paper; Section 3 motivates the need for the Acceptor and Connector patterns and illustrates how they have been applied to a production application-level Gateway; Section 4 describes the Acceptor and Connector patterns in detail; and Section 5 presents concluding remarks.", "title": "" }, { "docid": "2922158c41eed229f4beeb2ea130c108", "text": "Automatically generating captions of an image is a fundamental problem in computer vision and natural language processing, which translates the content of the image into natural language with correct grammar and structure. Attention-based model has been widely adopted for captioning tasks. Most attention models generate only single certain attention heat map for indicating eyes where to see. However, these models ignore the endogenous orienting which depends on the interests, goals or desires of the observers, and constrain the diversity of captions. To improve both the accuracy and diversity of the generated sentences, we present a novel endogenous–exogenous attention architecture to capture both the endogenous attention, which indicates stochastic visual orienting, and the exogenous attention, which indicates deterministic visual orienting. At each time step, our model generates two attention maps, endogenous heat map and exogenous heat map, and then fuses them into hidden state of LSTM for sequential word generation. We evaluate our model on the Flickr30k and MSCOCO datasets, and experiments show the accuracy of the model and the diversity of captions it learns. Our model achieves better performance over state-of-the-art methods.", "title": "" }, { "docid": "e4c27a97a355543cf113a16bcd28ca50", "text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.", "title": "" }, { "docid": "44750e99b005ccf18b221576fa7304e7", "text": "Due to the diversity of natural language processing (NLP) tools and resources, combining them into processing pipelines is an important issue, and sharing these pipelines with others remains a problem. We present DKPro Core, a broad-coverage component collection integrating a wide range of third-party NLP tools and making them interoperable. Contrary to other recent endeavors that rely heavily on web services, our collection consists only of portable components distributed via a repository, making it particularly interesting with respect to sharing pipelines with other researchers, embedding NLP pipelines in applications, and the use on high-performance computing clusters. Our collection is augmented by a novel concept for automatically selecting and acquiring resources required by the components at runtime from a repository. 
Based on these contributions, we demonstrate a way to describe a pipeline such that all required software and resources can be automatically obtained, making it easy to share it with others, e.g. in order to reproduce results or as examples in teaching, documentation, or publications.", "title": "" }, { "docid": "2b595cab271cac15ea165e46459d6923", "text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.", "title": "" }, { "docid": "4df6bbfaa8842d88df0b916946c59ea3", "text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. 
Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.", "title": "" }, { "docid": "5a85db36e049c371f0b0e689e7e73d4a", "text": "Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm.While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore the systems-level challenges in achieving scalable, fault-tolerant quantum computation. In this lecture,we provide an engineering-oriented introduction to quantum computation with an overview of the theory behind key quantum algorithms. Next, we look at architectural case studies based upon experimental data and future projections for quantum computation implemented using trapped ions. While we focus here on architectures targeted for realization using trapped ions, the techniques for quantum computer architecture design, quantum fault-tolerance, and compilation described in this lecture are applicable to many other physical technologies that may be viable candidates for building a large-scale quantum computing system. We also discuss general issues involved with programming a quantum computer as well as a discussion of work on quantum architectures based on quantum teleportation. Finally, we consider some of the open issues remaining in the design of quantum computers.", "title": "" }, { "docid": "66154317ab348562536ab44fa94d2520", "text": "We describe a prototype dialogue response generation model for the customer service domain at Amazon. The model, which is trained in a weakly supervised fashion, measures the similarity between customer questions and agent answers using a dual encoder network, a Siamese-like neural network architecture. Answer templates are extracted from embeddings derived from past agent answers, without turn-by-turn annotations. Responses to customer inquiries are generated by selecting the best template from the final set of templates. We show that, in a closed domain like customer service, the selected templates cover >70% of past customer inquiries. Furthermore, the relevance of the model-selected templates is significantly higher than templates selected by a standard tf-idf baseline.", "title": "" } ]
scidocsrr